Dataset columns (name: type, value range):
article_id: int64, 6 to 10.2M
title: string, 6 to 181 characters
content: string, 1.17k to 62.1k characters
excerpt: string, 7 to 938 characters
categories: string, 18 classes
tags: string, 2 to 806 characters
author_name: string, 605 classes
publish_date: date, 2012-05-21 07:44:37 to 2025-07-11 00:01:12
publication_year: date, 2012-01-01 00:00:00 to 2025-01-01 00:00:00
word_count: int64, 200 to 9.08k
keywords: string, 38 to 944 characters
extracted_tech_keywords: string, 32 to 191 characters
url: string, 43 to 244 characters
complexity_score: int64, 1 to 4
technical_depth: int64, 2 to 10
industry_relevance_score: int64, 0 to 7
has_code_examples: bool, 2 classes
has_tutorial_content: bool, 2 classes
is_research_content: bool, 2 classes
64,823
|
How To Build Your Data Science Competency For A Post-Covid Future
|
The world collectively has been bracing for a change in the job landscape. Driven largely by the emergence of new technologies like data science and artificial intelligence (AI), these changes have already made some jobs redundant. To add to this uncertainty, the catastrophic economic impact of the Covid-19 pandemic has brought an urgency to upskill oneself to adapt to changing scenarios. While the prognosis does not look good, this could also create demand for jobs in the field of business analytics. This indicates that heavily investing in data science and AI skills today could mean the difference between being employed or not tomorrow. By adding more skills to your arsenal today, you can build your core competencies in areas that will be relevant once these turbulent times pass. This includes sharpening your understanding of business numbers and analysing consumer demands – two domains which businesses will heavily invest in very soon. But motivation alone will not help. You need to first filter through the clutter of online courses that the internet is saturated with. Secondly, you need to create a study plan that ensures that you successfully complete these courses. We have a solution. Developed with the objective of providing you a comprehensive understanding of key concepts tailored to align with the jobs of the future, Analytix Labs is launching a series of special short-term courses. These courses will not only help you upskill yourself, they will also ensure that you complete them in a matter of a few days. These short-term courses will have similar content to the regular ones, but packed in a more efficient way. Whether you are looking for courses in business analytics, applied AI, or data analytics, these should hold you in good stead for jobs of the future. Analytics Edge (Data Visualization & Analytics). About The Course: This all-encompassing data analytics certification course is tailor-made for analytics beginners. It covers key concepts around data mining, and statistical and predictive modelling skills, and is curated for candidates who have no prior knowledge of data analytics tools. What is more, the inclusion of the popular data visualization tool Tableau makes it one of the best courses available on the subject today. Additionally, it also puts an emphasis on widely used analytics tools like R, SQL and Excel, making this course truly unique. Duration: While the original data analytics course from which this short-term course is developed includes 180 hours of content and demands an average of 10-15 hours of weekly online classes and self-study, this course will enable you to acquire the same skills within a shorter period of time. Target Group: While anyone with an interest in analytics can pursue this course, it is especially targeted at candidates with a background in engineering, finance, math, and business management. It will also be a useful skill-building course for candidates who want to target job profiles based around R programming, statistical analysis, Excel-VBA or Tableau-based BI analyst roles. Data Science Using Python. About the course: Adapted to help candidates searching for data science roles, this certification covers all that they need to know on the subject using Python as the programming language. While other languages like R are also commonly used today, Python has emerged as one of the more popular options within the data science universe.
This ‘Python for Data Science’ course will make you proficient in deftly handling and visualizing data, and also covers statistical modelling and operations with NumPy. It also integrates these with practical examples and case studies, making it a unique online data science training course in Python. Duration of the course: While the original data science course from which this short-term course is developed includes 220 hours of content and demands an average of 15-20 hours of weekly online classes and self-study, this course will enable you to acquire the same skills within a shorter period of time. Target Group: While anyone with an interest in analytics can pursue this course, it is especially targeted at candidates with a background in working with data analysis and visualization techniques. It will also help people who want to undergo Python training with advanced analytics skills to help them jumpstart a career in data science. Machine Learning & Artificial Intelligence. About this course: This course delves into the applications of AI using ML and is tailor-made for candidates looking to start their journey in the field of data science. It will cover tools and libraries like Python, NumPy, Pandas, Scikit-Learn, NLTK, TextBlob, PyTorch, TensorFlow, and Keras, among others. Thus, after successful completion of this Applied AI course, you will not only be proficient in the theoretical aspects of AI and ML, but will also develop a nuanced understanding of their industry applications. Duration of the course: While the original ML and AI course from which this short-term course is developed includes 280 hours of content and demands an average of 8-10 hours of weekly self-study, this Applied AI course will enable you to acquire the same skills within a shorter period of time. Target Group: While anyone with an interest in analytics can pursue this course, it is especially targeted at candidates with a background in engineering, finance, math, statistics, and business management. It will also help people who want to acquire AI and machine learning skills to get a head start on a career in the field of data science. Summary: While the Covid-19 pandemic has brought a partial – or even complete – lockdown at several places across the globe, people have been reorienting their lives indoors. But with no end in sight, professionals need to turn these circumstances into opportunities to upskill. Given an oncoming recession and economic downturn, it behoves them to adapt to these changes to remain employable in such competitive times. In this setting, Covid-19 could emerge as a tipping point for learning, with virtual learning offering the perfect opportunity to self-learn.
|
The world collectively has been bracing for a change in the job landscape. Driven largely by the emergence of new technologies like data science and artificial intelligence (AI), these changes have already made some jobs redundant. To add to this uncertainty, the catastrophic economic impact of the Covid-19 pandemic has brought in an urgency to […]
|
["AI Trends"]
|
["Applications of Data Mining", "covid-19", "Data analyst jobs", "Data Science", "what is data science"]
|
Anu Thomas
|
2020-05-08T12:00:00
|
2020
| 988
|
["what is data science", "data science", "scikit-learn", "artificial intelligence", "machine learning", "covid-19", "AI", "PyTorch", "Keras", "ML", "Data analyst jobs", "Applications of Data Mining", "analytics", "Data Science", "TensorFlow"]
|
["AI", "artificial intelligence", "machine learning", "ML", "data science", "analytics", "TensorFlow", "PyTorch", "Keras", "scikit-learn"]
|
https://analyticsindiamag.com/ai-trends/how-to-build-your-data-science-competency-for-a-post-covid-future/
| 3
| 10
| 2
| false
| true
| true
|
10,060,860
|
Mathangi Sri appointed as Chief Data Officer of CredAvenue
|
Mathangi Sri has been appointed as the Chief Data Officer at CredAvenue, a debt product suite and marketplace company. Sri joined CredAvenue from Gojek, where she headed data strategy for GoFood and played a key role in building various AI and ML solutions. “I would be building the data strategy encapsulating data sciences, ML engineering, data governance and data engineering. I believe in data as the first principle thinking, and thus will focus on building high-impact data platforms that deliver solid business impacts. Data is at the core of every operation at CredAvenue and I am looking forward to an exciting journey building world-class solutions powering the debt marketplace,” said Mathangi Sri, Chief Data Officer at CredAvenue. Mathangi Sri has a proven track record of over 18 years in building world-class data science solutions and products. She has 20 patent grants overall in the area of intuitive customer experience and user profiles. Mathangi has recently published a book called “Practical Natural Language Processing with Python”. “Mathangi’s exceptional experience in data science will help us shape CredAvenue’s journey towards becoming a more futuristic company. We plan to invest significantly in our data platform and empower our customers to manage their transactions actively,” Gaurav Kumar, founder and CEO of CredAvenue, said. Mathangi Sri has worked with organisations like Citibank, HSBC and GE, and tech startups like 247.ai and PhonePe. She is also an active contributor to the data science community.
|
Mathangi Sri has over 20 patent grants in the area of intuitive customer experience and user profiles.
|
["AI News"]
|
["chief data officer", "gojek", "Mathangi Sri"]
|
SharathKumar Nair
|
2022-02-17T11:06:01
|
2022
| 235
|
["data science", "Go", "Mathangi Sri", "AI", "chief data officer", "gojek", "ML", "Python", "data engineering", "data governance", "GAN", "R", "startup"]
|
["AI", "ML", "data science", "Python", "R", "Go", "data engineering", "data governance", "GAN", "startup"]
|
https://analyticsindiamag.com/ai-news-updates/mathangi-sri-appointed-as-chief-data-officer-of-credavenue/
| 2
| 10
| 3
| true
| false
| false
|
2,077
|
Interview – Ajay Ohri, Author “R for Business Analytics”
|
Ajay Ohri of Decisionstats.com has recently published ‘R for Business Analytics’ with Springer. The book is now available on Amazon at http://www.amazon.com/R-Business-Analytics-A-Ohri/dp/1461443423 The introduction of the book reads: R for Business Analytics looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and its 4000 packages. With this information the reader can select the packages that can help process the analytical tasks with minimum effort and maximum usefulness. This book is aimed at business analysts with basic programming skills who want to use R for business analytics. Note that the scope of the book is neither statistical theory nor graduate-level research in statistics; rather, it is for business analytics practitioners. In an interview with Analytics India Magazine, Ajay talks about his experience of writing the book and his take on R and similar statistical software. Analytics India Magazine (AIM): How did you decide to write a book on R especially for Business Analytics professionals? Ajay Ohri (AO): I got involved in R in 2007 when I created my startup in business analytics consulting, since I could not afford my existing tool called Base SAS. After learning it for a couple of years, I found that the existing documentation and literature was aimed more at statisticians than at MBAs like me who wanted to learn R for business analytics. So I sent a proposal to Springer Publishing and they accepted, and so I wrote the book. AIM: What did it take to have a book published? AO: An idea, a good proposal, and 2 years of writing and 6 months of editing. Lots of good luck, and good wishes from my very patient instructors and mentors across the world. AIM: How is R different from other statistical tools available in the market? What are its strengths and weaknesses vis-à-vis SAS and SPSS? AO: R is fundamentally different from the SAS language (which is divided into procedures and data steps) and the menu-driven SPSS. It is object oriented, much more flexible, hence powerful, yet confusing to the novice, as there are multiple ways to do anything in R. It is overall a very elegant language for statistics and the strengths of the language are enhanced by nearly 5000 packages developed by leading brains across the universities of the planet. AIM: Which R packages do you use the most and which ones are your favorites? AO: I use R Commander and Rattle a lot, and I use the dependent packages. I use car for regression, and forecast for time series, and many packages for specific graphs. I have not mastered ggplot, though I do use it sometimes. Overall I am waiting for Hadley Wickham to come up with an updated book to his ecosystem of packages, as they are very formidable, completely comprehensive and easy to use in my opinion, so much so that I can get by with the occasional copy and paste of code. AIM: What level of adoption do you see for R as a preferred tool in the industry? Are Indian businesses also keen to adopt R? AO: I see surprising growth for R in business, and I have had to turn down offers for consulting and training as I write my next book, R for Cloud Computing. Indian businesses are keen to cut costs like businesses globally, but have an added advantage of having a huge pool of young engineers and quantitatively trained people to choose from.
So there is more interest in India for R, and it is growing thanks to the efforts of companies like SAP, Oracle, Revolution Analytics and RStudio, who have invested in R and are making it more popular. The R Project organization is dominated by academia, and this reflects the fact that their priority is making the software better, faster and more stable, but the rest of the community has been making efforts to introduce it to industry. AIM: How did you start your career in analytics and how were you first acquainted with R? AO: I started my career after my MBA in selling cars, which was selling a lot of dreams and managing people telling lies to people to sell cars. So I switched to business analytics thanks to GE in 2004, and I had the personal good luck of having Shrikant Dash, ex CEO of GE Analytics, as my first US client. He was a tough guy and taught me a lot. I came to R only after leaving the cozy world of corporate analytics in 2007. AIM: Are you working on any other book right now? AO: I am working on “R for Cloud Computing” for Springer, besides my usual habit of writing my annual poetry book (which is free), tentatively titled “Ulysses in India”. My poetry blog is at http://poemsforkush.com and my technology blog is at http://decisionstats.com and I write there when not writing or pretending to write books. AIM: What do you suggest to new graduates aspiring to get into the analytics space? AO: Get in early, pick up multiple languages, pick up business domain knowledge, and work hard. Analytics is a very lucrative and high-growth career. You can read my writings on analytics by just googling my name. AIM: How do you see analytics evolving today in the industry as a whole? What are the most important contemporary trends that you see emerging in the analytics space across the globe? AO: I don’t know how analytics will evolve, but it will grow bigger and move more towards the cloud and bigger data sizes. Big Data/Hadoop, cloud computing, business analytics and optimization, and text mining are some of the buzzwords that are currently in fashion. Biography of Ajay Ohri: Ajay Ohri is the founder of analytics startup Decisionstats.com. He has pursued graduate studies at the University of Tennessee, Knoxville and the Indian Institute of Management, Lucknow. In addition, Ohri has a mechanical engineering degree from the Delhi College of Engineering. He has interviewed more than 100 practitioners in analytics, including leading members from all the analytics software vendors. Ohri has written almost 1300 articles on his blog, besides guest writing for influential analytics communities. He teaches courses in R through online education and has worked as an analytics consultant in India for the past decade. Ohri was one of the earliest independent analytics consultants in India, and his current research interests include spreading open source analytics, analysing social media manipulation, simpler interfaces to cloud computing and unorthodox cryptography.
|
Ajay Ohri of Decisionstats.com has recently published ‘R for Business Analytics’ with Springer. The book is now available on Amazon at http://www.amazon.com/R-Business-Analytics-A-Ohri/dp/1461443423 The introduction of the book- R for Business Analytics looks at some of the most common tasks performed by business analysts and helps the user navigate the wealth of information in R and […]
|
["AI Features"]
|
["Interviews and Discussions", "Oracle Interview"]
|
Дарья
|
2012-11-16T15:50:26
|
2012
| 1,085
|
["big data", "Go", "startup", "programming_languages:R", "AI", "cloud computing", "Oracle Interview", "Aim", "analytics", "GAN", "R", "Interviews and Discussions"]
|
["AI", "analytics", "Aim", "cloud computing", "R", "Go", "big data", "GAN", "startup", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-features/interview-ajay-ohri-author-r-for-business-analytics/
| 4
| 10
| 4
| false
| false
| false
|
30,533
|
How Is AI Guiding The Navigation Of Autonomous Planetary Rovers
|
Source: NASA. Whether it is locating the position of a planet or taking pictures, tasks on planetary rovers that depend on humans can take long hours. With advances in AI, researchers are developing deep learning algorithms to perform the necessary image observation and shorten the localization process drastically. This article highlights some of the common challenges in the current navigation system and how the use of machine learning can ensure a better future. It is based on recent research by the University of Glasgow. Planetary Rover Navigation: Planetary Rover Navigation (PRN) requires a robot to design a feasible route from a starting pose to a final destination in an optimal manner. Leading space research organizations like ISRO, NASA, and JAXA are adopting this technique, which aids in finding an appropriate direction. This technique can be divided into two scenarios: 1. Global path planning: This technique focuses on finding high-level routes based on prior knowledge of the surroundings and is valid for generating an optimal high-level procedure for a rover to execute. Yet, this method is incomplete for handling dynamic environments. 2. Local path planning: This technique depends upon sensory information to ensure global plans are accomplished exactly and possible collisions are prevented. Overcoming Challenges In The Current System: Taking into account execution time, memory overhead, and whether the environment of the search is static, dynamic or real-time deterministic, an adaptive feature selection approach to terrain classification based on the random forest method has been presented. It uses a self-learning framework to train a visual classifier, in which fundamental information is extracted from geometric features associated with the terrain. Additionally, learning-based fuzzy and neural network approaches have brought improvements. These methods focus on the accurate navigation of a mobile robot with adjustable speeds while avoiding local minima. A robot becomes adept at maneuvering around obstacles by self-learning from experience. These methods illustrate how deep learning techniques can be used to overcome problems associated with exploring unknown celestial bodies. Machine Learns To Find A Path: AI In Rovers: Since AI entered space exploration, a number of programmes have enhanced the reliability and capability of direction-finding procedures. Presently, the area that most interests scientists is the generation of highly realistic routes for the rovers. Path Finding Algorithms: The operation consists of two main steps: graph generation and a pathfinding algorithm. The graph generation problem for terrain topology is acknowledged as a foundation of robotics in space exploration. In this scenario, route navigation is tested in diverse continuous environments, such as known 2D/3D and unknown 2D environments. Each of these experiments uses one of two techniques: skeletonization or cell decomposition. Skeletonization: In the skeletonization procedure, a skeleton is formed from the continuous environment.
This skeleton captures the notable topology of the traversable space by defining a graph G=(V, E), where V is a set of vertices that map to coordinates in the continuous environment and E is the set of edges connecting vertices that are in the line of sight of one another. The skeletonization technique can produce two types of uneven graphs, namely a visibility graph or a waypoint graph. Cell Decomposition: The cell decomposition technique breaks down the traversable space in the continuous environment into cells. Each cell is commonly represented by a circle or convex polygon that does not contain obstructions. Machines can travel in a straight line between any two coordinates within the same cell. Source: JAXA (Japan Aerospace Exploration Agency). A* Search Algorithm: Further, in the pathfinding process, the task is to return the optimal path to the machine dynamically. A* is the notable search algorithm for robotics. It was the first algorithm to use a heuristic function to traverse a search graph in a best-first manner; the search expands from the origin node until the goal node is found (a minimal sketch of the algorithm appears at the end of this article). A* inspired many modified and improved algorithms. Concluding Note: It is widely accepted that exceptional outcomes have been seen as AI ventures into more complex and harsh environments. It is fair to say that adaptive, intelligent and more generalized methods will play a crucial role in equipping planetary rovers with the essential capabilities to interact with the environment in a truly autonomous way. Though complete accuracy remains a challenge, with the day-to-day advancements in adaptive self-learning systems, the future of space rovers looks assured.
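To make the pathfinding step concrete, below is a minimal A* sketch in Python over a small 2D occupancy grid. It is illustrative only and not taken from the University of Glasgow research or any rover software: the grid, start and goal values are invented, moves are assumed to be 4-connected with unit cost, and a Manhattan-distance heuristic stands in for real terrain costs.

import heapq

def a_star(grid, start, goal):
    # grid: 2D list where 0 = traversable cell and 1 = obstacle
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance
    open_heap = [(heuristic(start, goal), 0, start)]  # entries are (f, g, node)
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            # walk back through came_from to rebuild the path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected moves
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1  # unit step cost
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (ng + heuristic(nxt, goal), ng, nxt))
    return None  # no path exists

# 0 = traversable, 1 = obstacle; values are purely illustrative
terrain = [[0, 0, 0, 0],
           [1, 1, 0, 1],
           [0, 0, 0, 0],
           [0, 1, 1, 0]]
print(a_star(terrain, (0, 0), (3, 3)))

A real planner would replace the unit step cost with terrain traversability costs, for example from the learned classifier described above.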
|
Whether locating a position of a planet or taking pictures, human dependency in planetary rovers might take long hours. With an advanced AI, researchers are developing deep learning algorithms to perform the necessary image observation and shorten the localization process drastically. This article highlights some of the common challenges in the current navigation system and […]
|
["AI Features"]
|
["autonomous systems"]
|
Bharat Adibhatla
|
2018-11-22T09:21:45
|
2018
| 746
|
["ai_frameworks:JAX", "Go", "machine learning", "programming_languages:R", "AI", "neural network", "autonomous systems", "deep learning", "JAX", "GAN", "R"]
|
["AI", "machine learning", "deep learning", "neural network", "JAX", "R", "Go", "GAN", "ai_frameworks:JAX", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-features/how-is-ai-guiding-the-navigation-of-autonomous-planetary-rovers/
| 3
| 10
| 0
| true
| false
| true
|
50,991
|
Data Privacy: How Big Tech Companies Like Facebook Cross The Line
|
Over the last few years, we have seen data privacy issues becoming mainstream. In many cases, big tech companies have been found to have mishandled consumer data or mined data without consent. The case for data privacy is becoming even more relevant as we move into the age of AI. The argument is hot, and tech companies are already being put centre stage. Large tech companies including Facebook, Google and Amazon have attracted multiple critics, yet it seems there is a race among both companies and nations to acquire data, as it has been consistently touted as the new-age oil, and possibly more powerful than it. But constant privacy and data breaches have pushed the importance of privacy to the forefront, especially in the West. Companies Hungry For Hyper-Personalisation At The Expense Of Privacy: There is a major aspect to data usage on the consumer side. Companies are hungry for hyper-personalisation, meaning that to gain a competitive edge they want to know everything about a particular customer, their needs and behaviours on a given tech platform, to make useful recommendations. According to Hemant Misra, Head of Applied Research at Swiggy, there needs to be a balance between hyper-personalisation for customer experience and ethical data usage. “None of the consumer tech companies with user data, except for Google, does hyper-personalisation to the extent that we have a complete 360-degree view of any particular user. So, we are looking at the data and doing clustering in order to understand who the other similar users are, their choices and needs, and build recommender systems for better customer experience. The problem is when data gets joined; when Facebook acquired WhatsApp, it gave the company a better view of analytics across the two platforms, user devices, their social status and the places they are visiting, by tracking all that using WhatsApp and joining it with the Facebook social media platform. So, the problem is that with more hyper-personalisation, more data is being collected and that can lead to misuse. We saw what happened with the Facebook-Cambridge Analytica scandal, which exposed the misuse on the Facebook platform,” Misra explained while speaking at ThoughtWorks Live 2019. The question is where it is all headed, and what the end goal of the data being collected by governments is. The global AI technology race is another big aspect of it. According to Sudhir Tiwari, Managing Director of ThoughtWorks India, at a time when a Go champion retired because he could not find a way to beat AI technology, it shows the power and dominance that data and AI bring. “There could also be a data arms race among countries, where some countries are more aggressive on data collection and algorithms which can generate insights much better. More importantly, players can’t decipher the precise strategy used by AI to dominate the game so convincingly. The same can be said about AI’s role in global power and influence, and AI needs more and more data at the expense of privacy. Unless there is a global consensus on data collection and usage, privacy and potential misuse of data will remain a challenge,” says Sudhir. Why Data Privacy Will Be Valuable In Future: There is also a huge debate on who owns the data.
While users of web services are generating petabytes of data, they may have no control over it. Users are also the victims of data misuse in the form of surveillance and social engineering, which may be influencing every aspect of human life, experts say. “As consumers lose trust because of data breaches, they will start looking for alternate products. The future will be different and data privacy will be valuable. We have seen in the last 5 years how big tech companies have refused to be responsible about how they handle personal data, and so they will face the consequence of losing consumer trust. At the same time, they might even go beyond and try to bank on surveillance capitalism in the coming years. But, I think the true next phase of the data revolution can only happen with transparency and security standards for user data, with a focus on good tech,” said Govind Shivkumar, Principal, Beneficial Technology at Omidyar Network. Jaspreet Bindra, author of The Tech Whisperer and a digital transformation consultant, says there is no free lunch. “On the Internet, we are used to getting things for free. We forget that if something is free, then you are the product. Without reading the user terms and conditions, users unknowingly give consent to companies on how their data can be used, especially in geographies where data regulations are not present currently,” Bindra said.
|
The last few years, we have seen data privacy issues becoming mainstream. In many cases, big tech companies have been found to have mishandled consumer data, or mining data without their consent. The case for data privacy is becoming even more relevant as we move into the age of AI. The argument is hot, and […]
|
["AI Features"]
|
["Big Tech", "Data Privacy", "data protection", "surveillance", "tech companies", "what is big data"]
|
Vishal Chawla
|
2019-12-02T18:00:00
|
2019
| 860
|
["Go", "API", "what is big data", "surveillance", "AI", "R", "digital transformation", "programming_languages:R", "programming_languages:Go", "Git", "Data Privacy", "Big Tech", "analytics", "Rust", "tech companies", "data protection"]
|
["AI", "analytics", "R", "Go", "Rust", "Git", "API", "digital transformation", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/data-privacy-big-tech-companies-facebook/
| 3
| 10
| 0
| false
| false
| false
|
10,097,178
|
Infosys Signs Mega AI Deal Worth $2B, Shares Rise Over 3%
|
India’s second-largest software services exporter Infosys has informed the stock exchanges that it has entered into a new agreement with an undisclosed established client to provide AI and automation services over a period of five years. The partnership has an estimated target spend of $2 billion. The announcement pushed the company’s stock price up by 3.6% on the Bombay Stock Exchange (BSE). “Infosys has entered into framework agreement with one of its existing strategic clients to provide AI and automation led development, modernisation and maintenance services. The total client target spend over 5 years is estimated at USD 2 billion,” the company said in an exchange filing on Monday. The news comes three days before July 20, when Infosys is scheduled to release the results of its June quarter (Q1FY24). According to the company’s exchange filing, the agreement covers the development, modernisation and maintenance of AI- and automation-led services. Notably, the IT giant had recently unveiled a wide-ranging and cost-free AI certification training initiative through Infosys Springboard. This program aims to equip individuals with the skills required to thrive in the future job landscape. Infosys’ AI move underscores the growing trend of Indian IT companies increasing their investments in the field of AI. Long before OpenAI’s ChatGPT hit the scene, Tech Mahindra was already working with generative AI. Notably, the IT behemoth’s chief executive, CP Gurnani, lauded the Storicool platform, an auto content creation tool that proved ahead of its time. In line with this trajectory, Tata Consultancy Services (TCS) made headlines with its own foray into generative AI capabilities, joining forces with Google Cloud. Furthermore, Wipro, too, has entered a partnership with Google Cloud to harness the power of its generative AI tools, integrating them with its in-house AI models, business accelerators, and pre-built industry solutions, as per the company’s announcement. This signifies a competitive shift within the Indian IT landscape. Read more: How Indian IT Giants are Bringing GenAI to Their Clients
|
Infosys will provide AI and automation services to the undisclosed established client over a period of five years.
|
["AI News"]
|
["AI India", "BSE", "India AI", "Indian IT", "Infosys", "TCS", "Wipro"]
|
Tasmia Ansari
|
2023-07-19T11:56:02
|
2023
| 324
|
["India AI", "Wipro", "ChatGPT", "GenAI", "Go", "Infosys", "AI", "OpenAI", "R", "BSE", "GPT", "Ray", "Aim", "generative AI", "AI India", "Indian IT", "TCS"]
|
["AI", "generative AI", "GenAI", "ChatGPT", "OpenAI", "Aim", "Ray", "R", "Go", "GPT"]
|
https://analyticsindiamag.com/ai-news-updates/infosys-signs-mega-ai-deal-worth-2b-shares-rise-over-3/
| 2
| 10
| 2
| false
| false
| false
|
10,014,327
|
Implications Of Allowing Private Sector Into Indian Space Industry
|
The Department of Space recently released a draft of a new space policy that eases the regulations on private entities participating in space-based activities. The policy aims to promote the participation of the private industry in India in providing space-based communication, both within the country and outside, to fulfil the increasing demand for satellite bandwidth. The government thinks that private entities can play a significant role in addressing the growing demand within India and also use the opportunity to make a mark in the international space communication market. The article discusses what doors the policy opens for private companies and the possible socio-economic implications for India. Benefits of including private players: Until now, the private sector largely worked in a subcontractor role with ISRO, and there was no independent actor outside the public sector. However, if the new policy is passed, private companies will be allowed to establish and operate satellite systems to provide capacity for communication. They will also be allowed to procure non-Indian orbital resources to build their space-based systems for communication services in and outside India. Alongside, ISRO will make its facilities and other relevant assets available to improve their capacities. The authorisation for this, however, will be overseen by a government regulator, IN-SPACe, a regulatory body under the Department of Space. Positive outcomes of the policy: To harness the enormous potential of space opportunities both domestically and worldwide, the Indian space economy needs to scale up. There is a lot of untapped potential that the space industry can explore, given the increasing number of internet users in India. Experts also argue that in order to cater to this increasing demand, it is imperative to look beyond the traditional modes of internet delivery and look for space-based solutions. With the infrastructure and knowledge already available through India’s space program and the vast amount of potential and resources the private sector has to offer, the new policy could help the space industry grow and fill the communication infrastructure deficit. Private players in India and abroad are already looking forward to participating. As a matter of fact, AWS, Amazon’s cloud arm, has recently announced a new business segment, ‘Aerospace and Satellite Solutions’, to oversee innovations in the satellite industry. Alongside, Indian firms like Sankhya labs are also looking to invest. At the same time, the availability and demonstration of emerging technologies have great significance in defining modern-day geopolitics. Hence, given the current geopolitical situation of the country and the security threats, growth in the space sector can help the country gain leverage over others. Negative consequences of the policy: Space technology is expensive and needs heavy investment. This kind of financial power is available only to a select few rich corporates, which can lead to monopolisation of the sector. Also, IN-SPACe’s role has been defined as a government regulator, ‘to provide a level-playing field’ for everyone. However, in the past, this has resulted in governments favouring the private sector over the public sector or leaning towards specific private brands. ISRO, since its inception, has always aimed to work on projects that can help India become self-reliant.
The space program has always worked on applications like remote sensing, tracking of land use and resource mapping, among others. However, private companies will have more interest in profit than in developing solutions that cater to the immediate socio-economic needs of the country. Hence, if a situation were to arise where private companies establish a space monopoly or gain unfair advantages from government regulators, space applications for social development will take a backseat and the public sector may not survive, or will slowly become irrelevant. The telecommunication sector is a case in point. Wrapping Up: India has successfully demonstrated its ability to carry out space research and projects. With this proposed new policy for space, India wants to tap into the private sector, which could help the industry grow. While that is the case, unregulated participation of the private industry in the space sector will not only have socio-economic repercussions but might also end up undermining the work that ISRO has been successfully doing for over five decades. Since private space activities will significantly increase if the policy is accepted, India needs to develop a robust legislative framework for space to ensure sustainable and inclusive growth.
|
The Department of Space recently released a draft for a new space policy, that eased the regulations on private entities to participate in space-based activities. The policy wants to promote the participation of the private industry in India to provide space-based communication, both within the country and outside, to fulfil the increasing demand for satellite […]
|
["IT Services"]
|
["IN-SPACe", "ISRO"]
|
Kashyap Raibagi
|
2020-12-15T11:00:00
|
2020
| 714
|
["Go", "ISRO", "AWS", "AI", "cloud_platforms:AWS", "innovation", "programming_languages:R", "RAG", "IN-SPACe", "Aim", "ViT", "R"]
|
["AI", "Aim", "RAG", "AWS", "R", "Go", "ViT", "innovation", "cloud_platforms:AWS", "programming_languages:R"]
|
https://analyticsindiamag.com/it-services/implications-of-allowing-private-sector-into-indian-space-industry/
| 2
| 10
| 3
| false
| false
| false
|
10,056,800
|
Hands-On Guide to Hugging Face PerceiverIO for Text Classification
|
Nowadays, most deep learning models are highly optimized for a specific type of dataset. Computer vision and audio analysis cannot use architectures that are good at processing textual data. This level of specialization naturally influences the development of models that are highly specialized in one task and unable to adapt to other tasks. So, in contrast to such specialized models, we will talk about Perceiver IO, a general-purpose model designed to address a wide range of tasks with a single architecture. The following are the main points to be discussed in this article.

Table of Contents
What is Perceiver IO?
Architecture of Perceiver IO
Implementing Perceiver IO for Text Classification

Let’s start the discussion by understanding Perceiver IO. What is Perceiver IO? A perceiver is a transformer that can handle non-textual data like images, sounds, and video, as well as spatial data. Other significant systems that came before Perceiver, such as BERT and GPT-3, are based on transformers. It uses an asymmetric attention technique to condense inputs into a latent bottleneck, allowing it to learn from a great amount of disparate data. On classification challenges, Perceiver matches or outperforms specialized models. The perceiver is free of modality-specific components. It lacks components dedicated to handling photos, text, or audio, for example. It can also handle several associated input streams of varying sorts. It takes advantage of a small number of latent units to create an attention bottleneck through which inputs must pass. One advantage is that it eliminates the quadratic scaling issue that plagued early transformers. Previously, specialized feature extractors were employed for each modality. Perceiver IO can query the model’s latent space in a variety of ways to generate outputs of any size and semantics. It excels at activities that need structured output spaces, such as natural language and visual comprehension, and multitasking. Perceiver IO matches a Transformer-based BERT baseline on the GLUE language benchmark without the need for input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation. The latent array is attended to using a specific output query associated with that particular output to produce outputs. To predict optical flow on a single pixel, for example, a query would use the pixel’s XY coordinates along with an optical flow task embedding to generate a single flow vector. It’s a spin-off of the encoder/decoder architecture seen in other projects. Architecture of Perceiver IO: The Perceiver IO model is based on the Perceiver architecture, which achieves cross-domain generality by assuming a simple 2D byte array as input: a set of elements (which could be pixels or patches in vision, characters or words in a language, or some form of learned or unlearned embedding), each described by a feature vector. The model then uses Transformer-style attention to encode information about the input array using a smaller number of latent feature vectors, followed by iterative processing and a final aggregation down to a category label. Hugging Face Transformers’ PerceiverModel class serves as the foundation for all Perceiver variants. To initialize a PerceiverModel, three further instances can be specified – a preprocessor, a decoder, and a postprocessor. A preprocessor is optionally used to preprocess the inputs (which might be any modality or a mix of modalities).
The preprocessed inputs are then utilized to execute a cross-attention operation with the latent variables of the Perceiver encoder. Perceiver IO is a domain-agnostic process that maps arbitrary input arrays to arbitrary output arrays. The majority of the computation takes place in a latent space that is typically smaller than the inputs and outputs, making the process computationally tractable even when the inputs and outputs are very large. In this technique (referring to the above architecture), the latent variables create queries (Q), whilst the preprocessed inputs generate keys and values (KV). Following this, the Perceiver encoder updates the latent embeddings with a (repeatable) block of self-attention layers. Finally, the encoder will create a tensor of shape (batch_size, num_latents, d_latents) containing the latents’ last hidden states. Then there’s an optional decoder, which may be used to turn the final hidden states of the latents into something more helpful, such as classification logits. This is performed by a cross-attention operation in which trainable embeddings create queries (Q) and the latents generate keys and values (KV).

Perceiver IO for Text Classification: In this section, we will see how Perceiver can be used for text classification. Now let’s install the transformers and datasets modules from Hugging Face.

! pip install -q git+https://github.com/huggingface/transformers.git
! pip install -q datasets

Next, we will prepare the data. The dataset contains IMDB movie reviews and we are using a small chunk of it. After loading the dataset, we will build label mappings that come in handy when doing inference.

from datasets import load_dataset

# load the dataset
train_ds, test_ds = load_dataset("imdb", split=['train[:100]+train[-100:]', 'test[:5]+test[-5:]'])

# making the dataset handy
labels = train_ds.features['label'].names
print(labels)
id2label = {idx: label for idx, label in enumerate(labels)}
label2id = {label: idx for idx, label in enumerate(labels)}
print(id2label)

Output

In this step, we will preprocess the dataset for tokenization. For that, we are using PerceiverTokenizer on both the train and test datasets.

# Tokenization
from transformers import PerceiverTokenizer

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
train_ds = train_ds.map(lambda examples: tokenizer(examples['text'], padding="max_length", truncation=True), batched=True)
test_ds = test_ds.map(lambda examples: tokenizer(examples['text'], padding="max_length", truncation=True), batched=True)

We are going to use PyTorch for further modelling, and for that we need to set the format of our data to be compatible with PyTorch.

# compatible with torch
from torch.utils.data import DataLoader

train_ds.set_format(type="torch", columns=['input_ids', 'attention_mask', 'label'])
test_ds.set_format(type="torch", columns=['input_ids', 'attention_mask', 'label'])
train_dataloader = DataLoader(train_ds, batch_size=4, shuffle=True)
test_dataloader = DataLoader(test_ds, batch_size=4)

Next, we will define and train the model.
from transformers import PerceiverForSequenceClassification
import torch
from transformers import AdamW
from tqdm.notebook import tqdm
from sklearn.metrics import accuracy_score

# Define model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = PerceiverForSequenceClassification.from_pretrained("deepmind/language-perceiver",
                                                           num_labels=2,
                                                           id2label=id2label,
                                                           label2id=label2id)
model.to(device)

# Train the model
optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(20):  # loop over the dataset multiple times
    print("Epoch:", epoch)
    for batch in tqdm(train_dataloader):
        # get the inputs
        inputs = batch["input_ids"].to(device)
        attention_mask = batch["attention_mask"].to(device)
        labels = batch["label"].to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs=inputs, attention_mask=attention_mask, labels=labels)
        loss = outputs.loss
        loss.backward()
        optimizer.step()

        # evaluate
        predictions = outputs.logits.argmax(-1).cpu().detach().numpy()
        accuracy = accuracy_score(y_true=batch["label"].numpy(), y_pred=predictions)
        print(f"Loss: {loss.item()}, Accuracy: {accuracy}")

Now, let’s do inference with the model.

text = "I loved this epic movie, the multiverse concept is mind-blowing and a bit confusing."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Forward pass
outputs = model(inputs=input_ids.to(device))
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted:", model.config.id2label[predicted_class_idx])

Output:

Final Words: Perceiver IO is an architecture that can handle general-purpose inputs and outputs while scaling linearly in both input and output sizes. As we have seen in practice, this architecture produces good results in a wide range of settings. While we have only seen it used for text data here, it can also be used for audio, video, and image data, making it a promising candidate for a general-purpose neural network architecture.

References: Hugging Face Documentation, Hugging Face Blog, Official Colab Notebooks, Link for the above code.
|
A perceiver is a transformer that can handle non-textual data like images, sounds, and video, as well as spatial data.
|
["Deep Tech"]
|
["Data Science", "Deep Learning", "Hugging Face", "Python", "text classification"]
|
Vijaysinh Lendave
|
2021-12-22T16:00:00
|
2021
| 1,162
|
["Hugging Face", "text classification", "NumPy", "AI", "neural network", "PyTorch", "Transformers", "computer vision", "Python", "Ray", "Colab", "deep learning", "Deep Learning", "Data Science"]
|
["AI", "deep learning", "neural network", "computer vision", "Ray", "PyTorch", "Hugging Face", "Transformers", "Colab", "NumPy"]
|
https://analyticsindiamag.com/deep-tech/hands-on-guide-to-hugging-face-perceiverio-for-text-classification/
| 4
| 10
| 0
| true
| true
| true
|
10,091,321
|
Can You Upload Your Mind and Live Forever?
|
After the tragic death of her partner, Martha comes across a service that lets people stay in touch with the deceased. This is the plot of an episode in the Black Mirror anthology series, which explores the darker side of tech. The episode, called ‘Be Right Back’, was released in 2013. However, a decade later, keeping your loved ones’ consciousness alive even after they are dead could be a reality with AI. Pratik Desai, the creator of KissanGPT, believes something similar may be possible by the end of this year. “Start regularly recording your parents, elders and loved ones. With enough transcript data, new voice synthesis and video models, there is a 100% chance that they will live with you forever even after leaving their physical body,” he tweeted. The idea is to gather sufficient data to construct a digital avatar in the form of software, a chatbot, or even a humanoid robot that resembles your loved one, allowing them to live on, in a sense, forever. However, Desai faced severe backlash for the tweet. Many made comparisons to the Black Mirror episode, which delves into the aftermath of excessive dependence on technology as a means of dealing with mourning and the passing of someone close. A dangerous territory: Desai received criticism because death and grief are an inevitable part of life. Although using AI to preserve the consciousness of our loved ones may be achievable, it could ultimately lead to denial and prolong the grieving process, resulting in further negative outcomes. Notably, in the Black Mirror episode, Martha struggles with the ethical and emotional implications of interacting with a virtual version of her deceased partner. Theo Priestley, author of ‘The Future Starts Now’, responding to Desai’s tweet, said, “Not only are they ‘not’ your parents but this is unethical and counter to natural grief… it’s insane. It’s transference at best, at worst, it’s preventing a person from overcoming their grief.” Similarly, a Twitter user said that AI will never be their parents. Instead, these would just be interactive tape-recordings of them. Another user said that it’s absolutely ghoulish to consider your family and friends as data to be immortalised with AI rather than as living, breathing people. Besides, another concern related to AI’s ability to bring back the dead is that it will most probably be available as a service provided by companies looking to profit from it. “What happens when you can no longer afford the subscription to have your dead relative around to talk to?” Priestley asks. Imagine a scenario where companies use AI-generated versions of our loved ones to sell products. This could give rise to serious concerns around privacy, exploitation, and emotional manipulation. Desai clarifies: In a conversation with AIM, Desai mentions that the tweet was taken out of context. To elaborate, he recounts a deeply personal experience that influenced the thought behind the tweet. When Desai, who lives in the US, had a daughter, his grandmother in India was eager to meet her. “We had a girl in our family after a very long time and she was very excited to meet her. I was very close [to her] and we had many things planned. But, she passed away weeks before our scheduled trip to India,” he narrated. Desai was devastated and harbours deep regrets over not having any pictures or recordings of his grandmother to show to his daughter. Clarifying further, Desai said that what he had in mind was a tool that could read bedtime stories to his daughter in her great-grandmother’s voice.
Given the technology we have now, creating such a tool, with the person’s consent, is very much possible today. However, he never intended to suggest a Black Mirror-esque scenario, he clarified. Uploading consciousness, a possibility? While Desai said keeping your loved ones alive through AI could be possible in a year or so, we wondered how true his statement was. Surprisingly, or shockingly for some, Desai could be speaking the truth. Last year, 87-year-old Marina Smith MBE managed to speak at her own funeral. Smith, who passed away in June 2022, was able to talk to her loved ones with an AI-powered ‘holographic’ video tool. The conversational video technology was invented by StoryFile, a cloud-based and automatic AI-powered video platform that brings the power of conversational video into everyone’s hands. Similarly, Somnium Space, a metaverse company founded by Artur Sychov, is already offering a similar service called ‘Live Forever’. Sychov wants to create digital avatars of people that will be accessible to their loved ones even after their death. “We can take your data and apply AI to it and recreate you as an avatar,” Sychov told Vice. “You will meet the person and you would, maybe for the first 10 minutes of the conversation, not even know that it’s actually AI. That’s the goal,” he said. Interestingly, former Googler Ray Kurzweil spoke publicly about using technology to keep his father’s consciousness alive post his demise, way back in 2010. Very recently, Kurzweil also predicted that humans will achieve immortality by 2030. Tools to facilitate this already exist, and the technology is only going to get better with time. Hence, despite the ethical concerns surrounding it, the desire to reconnect with deceased loved ones could be a powerful driver of demand for these technologies.
|
Despite the ethical concerns surrounding it, the desire to reconnect with deceased loved ones could be a powerful driver for the demand for these technologies.
|
["AI Highlights"]
|
[]
|
Pritam Bordoloi
|
2023-04-13T16:30:00
|
2023
| 896
|
["Go", "AI", "Git", "RAG", "GPT", "Ray", "Aim", "ViT", "R", "llm_models:GPT"]
|
["AI", "Aim", "Ray", "RAG", "R", "Go", "Git", "GPT", "ViT", "llm_models:GPT"]
|
https://analyticsindiamag.com/ai-highlights/can-you-upload-your-mind-and-live-forever/
| 2
| 10
| 0
| false
| true
| false
|
2,285
|
The Rise of Autonomous Cars In India
|
Let’s talk cases first. Google was the first company to launch self-driving cars. Tesla Motors, General Motors and Ford soon followed suit. Uber, the on-demand car player last year announced a $300 million deal with Swedish car maker Volvo to develop fully driverless, or autonomous, cars by 2021. The company acquired Otto, a San Francisco based startup focused on self-driving trucks. Consider the Indian scenario and why India direly needs autonomous vehicles. Even if we leave aside the deadly traffic jams and congested roads, India ranks 3rd in terms of deaths due to road accidents. There is one death every four minutes due to a road accident in India. Moreover, 20 children under the age of 14 die everyday due to road accidents. Experts say, autonomous, IoT enabled cars have the potential to bring down the number of car accidents to a great extent. Human follies have no role to play when driving is done by intelligent machines. The day may not be that far considering – The Indian Internet of Things (IoT) market is set to grow to $15 billion by 2020 from the current $5.6 billion, according to a report by NASSCOM. However, many opine that, from a market perspective, it seems more challenging to have autonomous cars in India, than let’s say Europe or the US. Then there is the legal scenario as well as to which country will draw up a legal framework to make autonomous driving a reality. The potential for autonomous cars in India is surely huge with IT and analytics skills needed to fuel the developments in the direction. India has tons of that. India, a new entrant in connected car segment- Though a new entrant in the connected car segment, India has great potential as it needs connectivity on the go. This connectivity is required for basic aspects such as tracking of, vehicles and providing travelers with customized services. For it to take shape, a strong synergy between auto companies, telecom providers and cloud service providers is required. The scope of growth is huge as it will open up new channels of revenue for everyone involved in the connectivity value chain – be map providers. Web application developers, mobile operators, enterprise application specialists and VAS providers. Mobile technology will further catapult this growth curve. As per TRAI data, India has one billion mobile phones and mobile internet is fast surpassing broadband. For the connected car and autonomous vehicle market to evolve, there is a need of bundling all these offerings into the ecosystem for seamless functioning of bandwidth allocation, storage and content management. In the end, connected vehicles must be productive. Dr Roshy John, a robotics professional had already designed an autonomous vehicle that had been tested on Indian road. He virtually simulated a Tata Nano using algorithms that would suit Indian road conditions. He used laser scanners in place of expensive sensors. He included pedal assistance, 3D simulation and driver psychology. His autonomous model can differentiate between static and dynamic vehicles. This model is not yet commercialized though. Cyber Media Research, a research firm is of the view that it will take another generation to make autonomous vehicle transportation network viable for low automated regions such as India. As per studies, global revenues from “connected cars” the forerunner to fully autonomous or self-driving cars — are growing at an annual rate of 27.5 per cent and are expected to touch $21 billion by 2020. 
For an autonomous vehicle to be effective, data is the most important factor. The timely collection, processing and sharing of data between vehicles and within vehicles is imperative for it to function smoothly. For autonomous vehicles to be successful, infrastructure, laws, regulations, traffic systems, emergency response systems, manufacturing systems, and data handling and processing systems need to undergo swift advancement. Though this is viable, completely removing the human touch in India is a tough thing to do: there is a large workforce of drivers and mechanics who would need to be placed in other jobs before we can practically look at such innovations. Safety is also an important concern in autonomous vehicles. Cyber attacks and hacking can cause huge damage, so autonomous vehicle manufacturers should make it a point to come up with strong cybersecurity measures to safeguard vehicle owners from such attacks. It may take a decade for mass adoption of driverless vehicles to take place.
|
Let’s talk cases first. Google was the first company to launch self-driving cars. Tesla Motors, General Motors and Ford soon followed suit. Uber, the on-demand car player last year announced a $300 million deal with Swedish car maker Volvo to develop fully driverless, or autonomous, cars by 2021. The company acquired Otto, a San Francisco […]
|
["IT Services"]
|
[]
|
AIM Media House
|
2017-09-01T06:42:25
|
2017
| 738
|
["Go", "AWS", "AI", "RPA", "ML", "innovation", "RAG", "ViT", "analytics", "R"]
|
["AI", "ML", "analytics", "RAG", "AWS", "R", "Go", "ViT", "RPA", "innovation"]
|
https://analyticsindiamag.com/it-services/rise-autonomous-cars-india/
| 4
| 10
| 4
| true
| false
| false
|
10,172,794
|
Indian BFSI Reinvents Risk Detection with AI-Driven Early Warning Systems
|
Non-performing assets (NPAs) have long plagued India’s banking and financial services sector. Traditional methods of credit risk management have often failed to detect early signs of borrower distress. As credit portfolios continue to grow in size and complexity, there’s also a critical need to consider automation and predictive intelligence, similar to many other sectors. AI-driven early warning systems (EWS) have started to transform risk management in the banking, financial services, and insurance (BFSI) sector by automating monitoring and enabling proactive action before defaults occur, benefiting the borrower. AI-Driven Financial Insights and Risk Management In conversation with AIM, Jaya Vaidhyanathan, CEO of BCT Digital, an AI-based risk management company, said, “While traditional systems may flag individual anomalies, AI models excel at connecting the dots across seemingly unrelated data points to detect early signs of credit stress.” Vaidhyanathan added that the data stream remains consistent when it comes to AI. However, an AI-driven EWS introduces a valuable layer of intelligence. This system not only processes large amounts of internal and external data but also identifies intricate patterns that are nearly impossible for humans to detect manually. Tarun Wig, co-founder and CEO of Innefu Labs, told AIM that post-COVID-19, customer financial behaviour has shifted dramatically, marked by digital-first interactions, multiple income streams and new spending patterns. “AI bridges this gap by ingesting real-time, high-frequency data instead of relying solely on static financial statements or past repayment records.” This enables early warning systems to continually learn and update risk profiles, identifying emerging stress signals much sooner than traditional models could. He believes that AI systems can detect early distress signals by utilising market-specific indicators, such as currency fluctuations, geopolitical news, regulatory changes, and sector developments. For example, disruptions in supply chains or industry downturns can be identified through news feeds and social sentiment analysis. By integrating these unconventional data points with core financial information, AI provides a more comprehensive view of creditworthiness, especially in volatile markets. Vaidhyanathan believes that banks are increasingly adopting streaming architectures while acknowledging their complexities. BCT Digital’s rt360 EWS is designed for flexibility, integrating both traditional methods, such as ETL, database links and flat files, and modern approaches, such as application programming interfaces (APIs) and streaming feeds. BCT Digital has developed a Real-Time Monitoring System (RTMS) to enhance low-latency alerting. This system enables near real-time data ingestion through APIs, bots, and streaming pipelines, which is essential for timely alerts, she added. The RTMS includes an expandable alert library for all bank portfolios with customisable thresholds. Moreover, it uses in-memory processing to detect suspicious transactions within milliseconds, facilitating immediate action and low-latency alerts. Encora, a digital product and software engineering provider, believes AI and machine learning are significantly reshaping traditional credit risk models, especially as consumer behaviours shift following COVID-19. Encora also partners with BFSI clients to develop scalable, AI-driven EWS and real-time data pipelines using cloud-native architectures. 
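To make the idea of a customisable, low-latency alert rule concrete, here is a minimal sketch in Python. It is not BCT Digital’s rt360 or RTMS; the window size, the minimum-history requirement and the sigma threshold are illustrative assumptions, and a production system would run such rules over streaming pipelines rather than an in-process loop.

```python
# Minimal sketch of one rule in an early-warning alert library: flag an account
# when its latest transaction deviates sharply from its own recent behaviour.
# Window size and threshold are illustrative, not any vendor's parameters.
from collections import defaultdict, deque
from statistics import mean, pstdev

class EarlyWarningMonitor:
    def __init__(self, window=50, z_threshold=4.0):
        self.z_threshold = z_threshold
        self.history = defaultdict(lambda: deque(maxlen=window))

    def ingest(self, account_id, amount):
        """Return an alert dict if the transaction looks anomalous, else None."""
        past = self.history[account_id]
        alert = None
        if len(past) >= 10:  # need some history before scoring
            mu, sigma = mean(past), pstdev(past)
            if sigma > 0 and abs(amount - mu) / sigma > self.z_threshold:
                alert = {
                    "account": account_id,
                    "amount": amount,
                    "reason": f"deviates {abs(amount - mu) / sigma:.1f} sigma from recent behaviour",
                }
        past.append(amount)
        return alert

monitor = EarlyWarningMonitor()
for txn in [1200, 900, 1100, 1000, 950, 1050, 980, 1020, 990, 1010, 250000]:
    result = monitor.ingest("ACC-001", txn)
    if result:
        print(result)  # the final, outsized transaction triggers an alert
```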
“By leveraging industry-specific AI accelerators, we deliver explainable and regulatory-compliant models that are ready for production, ensuring effective and proactive decision-making,” Chinmay Mhaskar, executive vice president at Encora, told AIM. Speaking about a real-time instance of EWS deployment at a prominent public sector bank, Vaidhyanathan said that BCT Digital implemented additional scenarios specifically designed to detect mule accounts, given the rising threat of fraudulent financial activity. Within just three months of rollout, the system successfully identified over 8,000 mule accounts using real-time data patterns and behaviour analysis. These accounts were immediately flagged and frozen in real time, helping the bank prevent potential financial losses and regulatory breaches. Encora uses AI to enhance customer insights and manage strategic risk effectively. “Our solutions predict default and renewal risk using machine learning for behavioural modelling and by scoring churn based on policy and payment patterns,” he said. Mhaskar highlighted that the company uses natural language processing (NLP) to analyse digital interactions and understand customer behaviour. By integrating credit risk and portfolio management, Encora turns default risks into measurable credit exposure. Its offerings include pre-trained AI models, real-time MLOps pipelines for risk scoring, and a unified view of risk by merging CX/UX data with policy histories, supported by API interoperability between credit and insurance systems. Unstructured Data Struggles The rise of digital banking and neo-banks presents new opportunities, alongside challenges related to data velocity and complexity that traditional systems struggle to manage. Mhaskar says AI-powered EWS must address issues such as unstructured data, fragmented ecosystems, and the need for real-time analytics, while also grappling with limited historical data and persistent concerns about data quality. Encora mitigates these challenges by developing AI-ready data mesh frameworks tailored to the fintech ecosystem and ensuring reliability through end-to-end MLOps orchestration. “We co-develop real-time, AI-ready data mesh frameworks tailored to the fintech ecosystem. Our NLP and behavioural models extract insights from digital signals, such as frustration events or session drop-offs, including pre-built API connectors, thin-file credit scoring templates, and customisable EWS dashboards,” Mhaskar highlighted. Similarly, the rt360-EWS is built to ingest structured, semi-structured, and unstructured data, converting them into a unified format for streamlined processing. Vaidhyanathan said financial institutions function within complex and non-standard IT ecosystems, which exhibit varying levels of data maturity. Therefore, they have implemented a diversified data ingestion strategy tailored to each specific data type and use case. Tackling Other Challenges Wig added that ensuring fairness begins with the curation of diverse data across different geographies and demographics. Regular bias audits and fairness-aware algorithms help identify and reduce discrimination. Moreover, transparent governance and human reviews are essential to prevent automated decisions from disproportionately impacting any community and to maintain ethical AI practices. Vaidhyanathan believes that transparency is crucial in regulated environments. 
BCT Digital’s EWS ensures that stakeholders understand the decision-making process by providing clear explanations for each alert and maintaining a detailed audit trail. “This transparency allows credit officers to understand not just that an alert was raised, but why it was raised—building confidence in the system’s output and enabling better decision-making.” “AI-powered EWS offer transformative potential for risk management, but for traditional financial institutions, adoption comes with real-world complexities. Financial institutions aren’t lacking intent; they’re grappling with deeply entrenched barriers across people, process, and platform,” Mhaskar concluded.
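As a rough illustration of the behaviour-based default and renewal risk scoring described above, the sketch below fits a logistic regression over a handful of policy and payment features. The feature names and toy data are assumptions made for illustration only, not Encora’s or BCT Digital’s production models.

```python
# Minimal sketch of behaviour-based churn/renewal risk scoring: a logistic
# regression over policy and payment features. Feature names and data are toy
# assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: days_since_last_payment, missed_payments_12m, premium_change_pct, app_logins_30d
X = np.array([
    [5, 0, 0.00, 12],
    [40, 1, 0.10, 3],
    [90, 3, 0.25, 0],
    [10, 0, 0.02, 8],
    [75, 2, 0.20, 1],
    [3, 0, -0.05, 15],
])
y = np.array([0, 1, 1, 0, 1, 0])  # 1 = policy lapsed / customer churned

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

new_policyholder = np.array([[60, 2, 0.15, 2]])
risk = model.predict_proba(new_policyholder)[0, 1]
print(f"churn risk score: {risk:.2f}")  # probability-like score fed into an EWS dashboard
```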
|
“AI models excel at connecting the dots across seemingly unrelated data points to detect early signs of credit stress.”
|
["AI Features"]
|
["BFSI", "early warning systems"]
|
Smruthi Nadig
|
2025-07-03T12:37:43
|
2025
| 1,017
|
["machine learning", "TPU", "AI", "BFSI", "sentiment analysis", "ML", "MLOps", "early warning systems", "RAG", "NLP", "Aim", "analytics"]
|
["AI", "machine learning", "ML", "NLP", "analytics", "MLOps", "Aim", "RAG", "sentiment analysis", "TPU"]
|
https://analyticsindiamag.com/ai-features/indian-bfsi-reinvents-risk-detection-with-ai-driven-early-warning-systems/
| 3
| 10
| 3
| true
| true
| false
|
10,040,142
|
Delhi Traffic? No Problem. Now, AI Will Show You The Way
|
Even with the improvement of public transportation and the growing concern over the environmental impact automobiles pose, cities have not seen a substantial decrease in congestion—if at all. Traffic management remains a vital concern in city planning—especially in Asia, which accounted for 6 of the top 10 cities with the worst traffic in 2020. We are constantly looking for better ways to handle congestion on the road. Now, technology has come to our rescue, with authorities increasingly making use of modern technologies like artificial intelligence, AR, and even blockchain to solve modern-day traffic problems. Smarter systems In 2018, drivers in Delhi spent around 58 percent more time stuck in traffic than drivers in any other city in the world. Finding a solution to Delhi’s growing congestion problem led the Ministry of Home Affairs to permit the Delhi Police to employ a new intelligent traffic management system (ITMS). Such systems apply artificial intelligence (AI), machine learning (ML) and data analysis tools to the existing traffic infrastructure. Delhi’s proposed project uses over 7,500 CCTV cameras, automated traffic lights, and 1,000 LED signs carrying sensors and cameras installed across the city. The Delhi Police will then use AI to process these feeds into real-time insights, collect them on the cloud and make real-time decisions on balancing traffic flow, identifying vehicle number plates, and spotting traffic trends. Such systems can help cities plan a more effective way to curb heavy congestion. Moreover, researchers at the Nanyang Technological University (NTU) in Singapore developed an AI-powered intelligent routing algorithm that minimises congestion by simultaneously directing the routes of multiple vehicles. An algorithm of this kind would suggest alternative routes to users in a way that keeps traffic low. Of course, such systems could also be tricky to implement, since they would have to be taught to prioritise emergency vehicles over private ones and to display specific routes to cyclists and buses. However, with the advancements in AI and machine learning, an algorithm that can differentiate between types of vehicles and tell vehicles and pedestrians apart does not seem far-fetched. Finally, the implementation of ITMS would also have a positive environmental impact. In the United States, the Surtrac intelligent traffic signal control system was deployed at 50 intersections (as of 2016) across Pittsburgh. The system curbed travel times by 26 percent, wait times at intersections by 41 percent and vehicle emissions by 21 percent. Thus, more efficient traffic management helps reduce harmful emissions—making AI a champion of both traffic control and greener solutions. A vision for the future Another innovation that has helped promote safe driving practices and manage traffic involves adopting Augmented Reality (AR) into existing systems. Smart car windshields could display essential information—with the help of technologies like AI and IoT systems—such as speed, ETA, possible obstacles ahead, and distance and congestion on nearby roads in real time. Using augmented reality in vehicles could help flag unsafe driving practices like jumping a red light or going over the speed limit. This could make driving safer and help manage congestion. That said, the market for AR in cars is new and unclear, with some experts estimating it to reach $14 billion by 2027 and some expecting a value of $673 billion by 2025. 
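A simple way to picture congestion-aware routing of the kind NTU’s researchers describe is a load-balancing assignment: each vehicle is sent down whichever candidate route currently has the lowest load-adjusted travel time. The sketch below is only an illustration under that assumption; the route names, free-flow times and congestion penalty are made up, and the actual NTU algorithm is considerably more sophisticated.

```python
# Minimal sketch of congestion-aware route assignment: vehicles are assigned
# one by one to the candidate route with the lowest load-adjusted travel time.
# Routes, times and the penalty factor are illustrative assumptions.
def assign_routes(vehicles, routes, base_time, penalty_per_vehicle=2.0):
    """routes: list of route names; base_time: dict route -> free-flow minutes."""
    load = {r: 0 for r in routes}
    assignment = {}
    for v in vehicles:
        # A route's cost grows as more vehicles are sent down it.
        cost = {r: base_time[r] + penalty_per_vehicle * load[r] for r in routes}
        best = min(cost, key=cost.get)
        assignment[v] = best
        load[best] += 1
    return assignment, load

vehicles = [f"car{i}" for i in range(10)]
routes = ["ring-road", "inner-city", "expressway"]
base_time = {"ring-road": 18, "inner-city": 15, "expressway": 20}

assignment, load = assign_routes(vehicles, routes, base_time)
print(load)  # vehicles spread across routes instead of all taking the fastest one
```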
Another potential use of technology in traffic management is blockchain-based contracts that could allow improved transaction mechanisms for tolls or even petrol pumps. These could enable secure automated payments and reduce payment time, and thus commute time—hence decreasing traffic. A multitude of new technologies is being used to solve problems we have not been able to solve traditionally. However, such technologies involve a great deal of risk, with many people fearing surveillance and privacy issues from the increased use of facial recognition technology and CCTV footage, or even the possibility of AR and IoT being used to transfer data to local authorities without explicit consent. All are valid concerns. However, with congestion projected to cost nearly $300 billion by 2030, something needs to be done to improve our existing traffic management systems.
|
Even with the improvement of public transportation and the growing concern over the environmental impact automobiles pose, cities have not seen a substantial decrease in congestion—if at all. Traffic management remains a vital concern in city planning—especially in Asia, which accounted for 6 of the top 10 cities with the worst traffic in 2020. We […]
|
["AI Features"]
|
[]
|
Mita Chaturvedi
|
2021-05-16T16:00:00
|
2021
| 669
|
["Go", "machine learning", "artificial intelligence", "programming_languages:R", "AI", "innovation", "ML", "programming_languages:Go", "ViT", "R"]
|
["AI", "artificial intelligence", "machine learning", "ML", "R", "Go", "ViT", "innovation", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/delhi-traffic-no-problem-now-ai-will-show-you-the-way/
| 4
| 10
| 1
| false
| false
| true
|
49,102
|
Google Has A Chief Decision Scientist, Who Is She
|
When it comes to data science, there is a lot of confusion around job definitions. Many companies still do not have a clear idea of what kind of data science professionals they need to solve their business problems. While many aspirants and employers are still pondering over designations like data scientist and data engineer, there is one more job title that was even more obscure but has now gained tremendous traction — the decision scientist. What Is Decision Science? Even though decision science is largely similar to data science and involves analytics, algorithms, machine learning and AI, the job of a decision scientist goes further. A data scientist is primarily involved in extracting meaningful insights, but a decision scientist does more than that: they possess not only technical and mathematical knowledge but also strong business knowledge, the ability to communicate effectively with different stakeholders, and the ability to frame and simplify poorly defined business problems. Simply put, this tribe of professionals is responsible for making some of the most crucial decisions throughout an organisation by turning data into context-specific, objective insights. Google’s Chief Decision Scientist In 2018, Google appointed Cassie Kozyrkov as the organisation’s Chief Decision Scientist. Kozyrkov has served in various technical roles at Google over the years, and now leads the search giant’s decision-making. Kozyrkov (whose designation is very much a mission statement) believes that artificial intelligence and big data analytics make human tasks easier, and at Google she makes the best of these technologies using data and behavioural science along with human decision-making. The Google chief decision scientist has guided more than 100 projects over the years and designed Google’s analytics program. That is not all: she has personally trained over 20,000 Googlers in statistics, decision-making, and machine learning. As the head of decision science, Kozyrkov’s prime mission is to democratise decision intelligence and safe, reliable AI. According to Kozyrkov, AI is just an extension of what humans have been striving for since ages, and there is no need to worry that AI will take away all human jobs. “If you can do it better without a tool, why use the tool? And if you’re worried about computers being cognitively better than you, let me remind you that your pen and paper are better than you at remembering things. A bucket is better than any human at holding water, a calculator is better than a human at huge numbers,” said Kozyrkov at AI Summit 2019, London. Demand In India It is not just Google in the decision science arena; companies like Microsoft and Walmart also believe that decision science is going to change the way business problems are solved. Bengaluru-based data analytics services company Mu Sigma is another example that shows decision science is going to be a sought-after domain. Back in 2015, the company took a vital step and raised compensation for entry-level decision scientists. According to a report, the company’s new compensation structure included a one-time salary advance of ₹5 lakh for all entry-level decision scientists who successfully completed the training offered by Mu Sigma University. With time, every domain is evolving, and data science is no exception. 
To stay on top of the game, every data science professional will have to upskill in line with these transformations. And as decision science comes into the picture, companies will soon realise the need for this breed of professionals.
|
When it comes to data science, there is a lot of confusion considering the job definitions. There are still many companies who don’t even have a clear idea of what kind of data science professionals they need to solve their business problems. While many aspirants and employers are still pondering over designations like data scientist […]
|
["Global Tech"]
|
[]
|
Harshajit Sarmah
|
2019-10-30T18:00:18
|
2019
| 589
|
["big data", "data science", "Go", "machine learning", "artificial intelligence", "AI", "ViT", "analytics", "GAN", "R"]
|
["AI", "artificial intelligence", "machine learning", "data science", "analytics", "R", "Go", "big data", "GAN", "ViT"]
|
https://analyticsindiamag.com/global-tech/data-science-google-decision-scientist/
| 3
| 10
| 1
| false
| true
| false
|
10,141,371
|
Perplexity’s Shopping Assistant is a Killer
|
California-based conversational search engine Perplexity is taking a step ahead with its new upgrade, introducing new shopping features that go beyond traditional search capabilities. CEO Aravind Srinivas believes Perplexity can be India’s AI app. His recent tweet suggested market potential for the search engine in India, which, if pursued, would open up interesting vernacular use cases. While Perplexity doesn’t possess its own foundational LLM, the company asserts that it provides substantial value. It currently manages over 100 million queries weekly, with the goal of scaling to 100 million queries daily. Srinivas has been bullish about expanding the company’s presence. In India, many founders and developers are already building on existing LLMs, creating value at the application layer. ChatGPT Moment for Shopping? This comes after Perplexity launched a new shopping feature for its pro users in the US, where users can research and purchase products. The AI commerce experience, ‘Buy with Pro’, lets users pick products from select merchants on the website or app, place their order and check out. Additionally, a visual search tool called ‘Snap to Shop’ displays relevant products, only requiring shoppers to take a photo of the product they wish to purchase. Srinivas took to X to share how the platform is evolving from a research tool to one that is revolutionising commerce. “I don’t quite think it’s the ChatGPT moment for shopping. But I think the future is looking bright for customers to find and buy what they want much faster without ads and spam. Some more work needs to be done to feel true magic. We’re on it,” he said, highlighting his focus on removing ads and spam for a cleaner, user-friendly shopping experience. With this new feature, Perplexity competes with Google’s Lens feature, Amazon’s Rufus assistant, and Walmart’s GenAI recommendation tool, but reportedly stands out by offering direct purchasing through its search engine. This strategy highlights the increasing adoption of generative AI to improve product discovery and streamline e-commerce transactions, and Perplexity is determined to stay ahead in this space. What’s Perplexity’s Moat? The new shopping feature comes amid intensifying competition in the search market, after OpenAI’s recent announcement of its ‘SearchGPT’ integration into ChatGPT. The company has been steadily expanding its capabilities by adding new features like Perplexity Spaces, a finance analysis tool, an internal file search engine, and an advanced reasoning mode. Perplexity has introduced innovative ways to streamline information during key events. During NVIDIA’s earnings call yesterday, ‘Perplexity Finance’ offered live transcripts and key highlights, which will soon expand to major stocks. A similar real-time experiment was done for the US elections, where Perplexity partnered with The Associated Press and Democracy Works to create an election information hub that included live updates, real-time vote counts, and personalised ballot details. Srinivas expressed disappointment over traditional news websites lacking proper coverage. Perplexity is often touted as a GPT wrapper. Commenting on the value that wrappers add, Srinivas said, “Wrappers are at all levels; it’s just that they have given you so much value that you do not care.” Amazon chief Jeff Bezos has invested in Perplexity AI, showing his confidence in its potential to innovate AI search. 
Even NVIDIA CEO Jensen Huang praised the tool, revealing he uses it “almost every day” for its practical benefits. Notably, Meta’s chief AI scientist, Yann LeCun, who strongly advocates ethical and moral AI practices, was involved in the company’s early funding rounds. Even at an event at Carnegie Mellon University, the host used Perplexity AI to create questions for Google CEO Sundar Pichai. On competing with players like Google today, Srinivas said, “Every single query on Perplexity, on average, has 10-11 words. Every query on Google has about two to three words, so users have much higher intent with each query, allowing them to ask more targeted questions.” Expansion Mode On Reports suggest that Perplexity is also preparing to raise funds at a valuation of $9 billion, which would be its fourth round this year. This funding will help Perplexity expand into newer markets and fight its legal battles. In August, Perplexity signed a revenue-sharing deal with publishers like TIME, Der Spiegel, and Fortune after plagiarism allegations. It soon faced lawsuits from News Corp, which owns The Wall Street Journal and New York Post, for copyright violations, and The New York Times for AI scraping, intensifying financial pressures amid growing plagiarism disputes. Srinivas criticised News Corp’s lawsuit, calling it a counterproductive and unnecessary conflict between media and tech, urging collaboration to create innovative tools and expand business opportunities.
|
Gone are the days of traditional shopping as AI takes over.
|
["AI Features"]
|
["AI (Artificial Intelligence)"]
|
Aditi Suresh
|
2024-11-21T17:03:00
|
2024
| 755
|
["Go", "ChatGPT", "GenAI", "OpenAI", "AI", "AWS", "ML", "RAG", "generative AI", "R", "AI (Artificial Intelligence)"]
|
["AI", "ML", "generative AI", "GenAI", "ChatGPT", "OpenAI", "RAG", "AWS", "R", "Go"]
|
https://analyticsindiamag.com/ai-features/perplexitys-shopping-assistant-is-a-killer/
| 2
| 10
| 2
| false
| false
| true
|
14,830
|
Big Data and Analytics is now being used in greening the planet
|
Microsoft is undertaking several projects dedicated to sustainability. The company has been making significant contributions in Tech for Good and has taken significant steps towards environmental conservation. Its going-green mantra is underscored by the $1.1 million it raised in 2016 and the 5,949 volunteering hours put in by its employees. But it doesn’t stop there. Microsoft’s ecosystem allows the firm, its employees, and its business partners to leverage new technologies to improve the sustainability of their companies and communities. The Redmond giant recently tied up with The Nature Conservancy, a nonprofit, to extend support to nonprofits globally. At Microsoft, big data is greening the planet. Microsoft’s commitment towards nature is deeply rooted in the technologies it utilizes. The company announced a $1 billion commitment to bring cloud computing resources to nonprofit organizations around the world, and donates nearly $2 million every day in products and services to nonprofits as part of that commitment. Microsoft has extended its support to organizations like the World Wildlife Fund, Rocky Mountain Institute, Carbon Disclosure Project, Wildlife Conservation Society, and the U.N. Framework Convention on Climate Change’s (UNFCCC) Climate Neutral Now initiative. Here is a slew of use cases. How Prashant Gupta’s initiative is helping farmers in Andhra Pradesh increase revenue: Prashant Gupta works as a Cloud + Enterprise Principal Director at Microsoft and is driving significant developments for the environment. Earlier, Gupta had facilitated a partnership between Microsoft, a United Nations agency (the International Crops Research Institute for the Semi-Arid Tropics, or ICRISAT), and the Andhra Pradesh government. The project involved helping groundnut farmers cope with drought. Gupta and his team leveraged advanced analytics and machine learning to launch a pilot program with a Personalized Village Advisory Dashboard for 4,000 farmers in 106 villages in Andhra Pradesh. It also included a Sowing App with 175 farmers in one district. Based on weather conditions, soil, and other indicators, the Sowing App advises farmers on the best time to sow, while the Personalized Village Advisory Dashboard provides insights on soil health, fertilizer recommendations, and seven-day weather forecasts. Microsoft’s Azure cloud platform for The Nature Conservancy’s Coastal Resilience program: Coastal Resilience is a public-private partnership led by The Nature Conservancy to help coastal communities address the devastating effects of climate change and natural disasters. The program has trained and helped over 100 communities globally in the uses and applications of Microsoft’s Natural Solutions Toolkit. The toolkit contains a suite of geospatial tools and web apps for climate adaptation and resilience planning across land and sea environments, helping communities strategize for risk reduction, restoration, and resilience to safeguard local habitats, communities, and economies. Puget Sound: Puget Sound’s lowland river valleys are a treasure house, delivering a wealth of natural, agricultural, industrial, recreational, and health benefits to the four million people who live in the region. However, these communities are at increasing risk of flooding from rising sea levels, more extreme coastal storms, and more frequent river flooding. 
The Conservancy’s Washington chapter is building a mapping tool as part of the Coastal Resilience toolkit to reduce the flow of polluted stormwater into Puget Sound. Emily Howe, an aquatic ecologist, is in charge of the project, which revolves around developing the new Stormwater Infrastructure mapping tool. This tool will eventually be integrated into the Puget Sound Coastal Resilience tool set, which will be hosted on Azure. Furthermore, it will include a high-level heat map of stormwater pollution for the region, combining an overlay of pollution data with human and ecological data to prioritize areas of concern. Data helps in watershed management: Today, around 1.7 billion people living in the world’s largest cities depend on water flowing from watersheds, and estimates suggest those watershed sources will be tapped by up to two-thirds of the global population by 2050. Kari Vigerstol, The Nature Conservancy’s Global Water Funds Director of Conservation, oversaw the development of a tool to provide cities with better data. The project entailed assisting cities in protecting their local water sources. 4,000 cities were analyzed by “Beyond the Source”, and the results stated that natural solutions can improve water quality for four out of five cities. Furthermore, the Natural Solutions Toolkit is being leveraged globally to better understand and protect water resources around the world. Through the water security toolkit, cities will be furnished with a more powerful set of tools. Users can also explore data and access proven solutions and funding models using the beta version of the Protecting Water Atlas, a tool that will help improve water quality and supply for the future. Microsoft is illuminating these places with its innovative array of big data and analytics offerings. In Finland, Microsoft partnered with CGI to develop a smarter transit system for the city of Helsinki. This data-driven initiative saw Microsoft utilize the city’s existing warehouse systems to create a cloud-based solution that could collate and analyse travel data. Helsinki’s bus team noticed a significant reduction in fuel costs and consumption, besides realizing increased travel safety and improved driver performance. Microsoft Research Lab Asia designed a mapping tool called Urban Air for markets in China. The tool allows users to see, and even predict, air quality levels across 72 cities in China, furnishing real-time, detailed air quality information using big data and machine learning. Additionally, the tool is paired with a mobile app, which is used about three million times per day. Microsoft is implementing environmental strategies worldwide. The firm is assisting the city of Chicago in designing new ways to gather data and helping the city use predictive analytics to better address water, infrastructure, energy, and transportation challenges. Boston serves as another great instance, where Microsoft is working to spread information about the variety of urban farming programs in the city and is counting on the potential of AI and other technology to increase the impact for the city. Microsoft has also partnered with Athena Intelligence to support the hill city of San Francisco. As part of this partnership, Microsoft is leveraging Athena’s data processing and visualization platform to gather valuable data about land, food, water, and energy. 
This will help in improving local decision-making. Outlook: Data is not all that matters. In the end, it is essentially about how cities can be empowered to take action based on that data. Microsoft has comprehensively supported the expansion of The Nature Conservancy’s innovative Natural Solutions Toolkit. The solution suite is already powering on-the-ground and in-the-water projects around the world, benefiting coastal communities, residents of the Puget Sound, and others globally. Microsoft is delivering on its promise to empower people and organizations globally to thrive in a resource-constrained world, and is empowering researchers, scientists and policy specialists at nonprofits by providing them with technology that addresses sustainability.
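As a rough illustration of how a tool like Urban Air might predict air quality from weather and activity signals, the sketch below fits a small gradient-boosting regressor. The features and readings are invented for illustration and do not reflect Microsoft’s actual data or models.

```python
# Minimal sketch: predict a PM2.5 reading from weather and traffic features
# with gradient boosting. Features and toy numbers are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Columns: temperature_C, humidity_pct, wind_speed_ms, traffic_index
X = np.array([
    [30, 40, 1.0, 80],
    [22, 70, 3.5, 40],
    [35, 30, 0.5, 90],
    [18, 80, 4.0, 30],
    [28, 50, 2.0, 60],
    [25, 65, 3.0, 45],
])
y = np.array([160, 55, 190, 40, 110, 70])  # observed PM2.5 (micrograms per cubic metre)

model = GradientBoostingRegressor(n_estimators=50, max_depth=2, random_state=0)
model.fit(X, y)
print(model.predict(np.array([[32, 35, 0.8, 85]])))  # forecast for hot, still, high-traffic conditions
```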
|
Microsoft has been making significant contributions in Tech for Good and has taken significant steps towards environment conservation. The company’s going green mantra is underscored by the $1.1 million in 2016, fundraising and 5,949 number of volunteering hours put in by its employees. But it doesn’t stop there. Microsoft’s ecosystem allows the firm, its employees, […]
|
["IT Services"]
|
["Azure cloud platform"]
|
Amit Paul Chowdhury
|
2017-05-09T12:26:05
|
2017
| 1,155
|
["Go", "machine learning", "AI", "cloud computing", "Azure", "R", "Azure cloud platform", "RAG", "Ray", "analytics", "predictive analytics"]
|
["AI", "machine learning", "analytics", "Ray", "RAG", "predictive analytics", "cloud computing", "Azure", "R", "Go"]
|
https://analyticsindiamag.com/it-services/big-data-analytics-now-used-greening-planet/
| 3
| 10
| 3
| false
| false
| true
|
10,005,992
|
Visualizations With SandDance Using Visual Studio Code
|
In the past we have seen many visualization tools like Power BI, Tableau, Salesforce, Splunk, etc., and lots of libraries like matplotlib, plotly, ggplot, bamboolib, etc., but how many of us have seen a code editor helping us with visualizations without having to code? Interesting, right? This is possible by using the SandDance extension in Visual Studio Code. SandDance is an extension that helps us visualize our data and drill down by filtering, and it can also generate 3D graphs with a single click. Let us see how we can get started with SandDance on Visual Studio Code. This article will cover: requirements, data transformations, an explanation of the dataset, loading the dataset and viewing it with SandDance, visualizations and insights, and a conclusion. Requirements: Visual Studio Code, the Python extension for Visual Studio Code, the SandDance extension for Visual Studio Code, and the Titanic dataset. Data transformations: Survived (0 = No, 1 = Yes); PClass (1 = 1st class, 2 = 2nd class, 3 = 3rd class). Explanation of the dataset: PassengerId – unique id for every passenger; Survived – did the passenger survive the accident or not (Yes/No); PClass – the passenger class (1st, 2nd or 3rd class); Name – name of the passenger; Sex – gender of the passenger; Age – age; SibSp – number of siblings/spouses aboard; Parch – number of parents/children aboard; Ticket – ticket number; Cabin – cabin number; Embarked – point of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton). Loading the dataset and viewing it with SandDance: File -> Open file… -> navigate to and select the Titanic dataset. Once the dataset is loaded, right-click the dataset file and look for “View in SandDance”. Visualizations and insights: When you view the dataset using SandDance, this is how it will look. Before we start with any visualizations, let’s see what all the icons on the page mean. From figure 1 we can understand that there were more men on the ship than women, but the thing to notice is that the survival ratio of females was higher than that of males. Let’s dig deeper and see what else can be understood. Figure 1: Column chart for Sex. By isolating the female column and dividing it further based on passenger class (PClass), we can see from figure 2 that about 50% of the women travelling in 3rd class died, whereas most of the women travelling in 1st and 2nd class survived. Figure 2: Column chart for females based on PClass. Now let’s add one more layer of detail with the select tab by highlighting females below the age of 18. Figure 3 shows that the maximum number of females below the age of 18 were travelling in 3rd class, and their death and survival ratio is roughly even. Figure 3: Overview of females below 18. Figure 4 shows us an overview of the passengers who boarded from Cherbourg, Queenstown and Southampton, which is the result of faceting the column chart of sex by embarkation point. From figure 4 we can see that most of the passengers boarded the ship from Southampton, and looking closely we can identify that around 75% of the men who boarded the Titanic from Southampton died. Almost all the men who boarded the ship from Queenstown died in the accident. Figure 4: Faceting the column chart of sex based on embarkation. Figure 5 gives us information on the passenger class of the people who survived. Let’s take a closer look at each graph separately. The colors tell us that in 1st class most of the people embarked from Cherbourg and Southampton; the 2nd class, on the other hand, is dominated by people from Southampton. 
The third class looks like a mix of people from all three locations. Figure 5: Column chart of sex faceted by PClass. Observing figure 6, we can find some anomalies related to the fare people paid to get into different classes. Have a look at the region encircled in the figure: the passenger paid a very low fare yet got into first class. If you click on that cell and look up his name on the internet, you’ll find that he wasn’t satisfied with his ticket, hence the crew upgraded him. Figure 6: Tree map of PClass based on fare. Figure 7 shows a 3D graph of people in first class with the Z-axis as the fare paid. It’s interesting to note that, on average, people who embarked from Cherbourg paid more for their first-class tickets than the others. Figure 7: 3D graph of the fare paid by 1st class passengers. Conclusion: EDA is a very crucial part of the data science pipeline, and one should always use tools that provide a lot of functionality with less stress on coding. Better and quicker visualizations lead to efficient decision-making. One of the major benefits of using SandDance is how easy it is to drill down to a focused view of every graph, and the ability to isolate parts of the graphs for further analysis.
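For readers who prefer to script the data transformations listed above before opening the file in SandDance, a minimal pandas sketch follows. The file names are assumptions, and the column names follow the standard Titanic CSV, so adjust them if your copy differs.

```python
# Minimal sketch: map the coded Survived and Pclass columns to readable labels
# before right-clicking the saved file and choosing "View in SandDance".
# File names are assumptions; columns follow the standard Titanic CSV.
import pandas as pd

df = pd.read_csv("titanic.csv")
df["Survived"] = df["Survived"].map({0: "No", 1: "Yes"})
df["Pclass"] = df["Pclass"].map({1: "1st class", 2: "2nd class", 3: "3rd class"})
df.to_csv("titanic_labelled.csv", index=False)  # open this file in VS Code and view in SandDance
```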
|
In the past we have seen many visualization tools like PowerBI, Tableau, Salesforce, Splunk, etc. and lots of libraries like matplotlib, plotly, ggplot, bamboolib, etc., but how many of us have seen a code editor helping us with visualizations without having to code? Interesting right? This is possible by using the SandDance extension in Visual […]
|
["Deep Tech"]
|
["apm data science"]
|
Rithwik Chhugani
|
2020-09-03T18:00:31
|
2020
| 832
|
["data science", "Go", "Plotly", "AI", "ETL", "RAG", "Python", "apm data science", "programming_languages:Python", "Matplotlib", "R"]
|
["AI", "data science", "Matplotlib", "Plotly", "RAG", "Python", "R", "Go", "ETL", "programming_languages:Python"]
|
https://analyticsindiamag.com/deep-tech/visualizations-with-sanddance-using-visual-studio-code/
| 3
| 10
| 0
| true
| false
| false
|
6,542
|
Machine Learning For Better And More Efficient Solar Power Plants
|
Machine learning techniques support better solar power plant forecasting. Machine learning techniques play a crucial role in deciding where to build a plant when accurate location data is limited or unavailable. Machine learning techniques help maintain smart grid stability. The global solar photovoltaic (PV) installed capacity in 2013 was 138.9 GW, and it is expected to grow to over 455 GW by 2020. However, solar power plants still have a number of limitations that prevent them from being used on a larger scale. One limitation is that power generation cannot be fully controlled or planned for in advance, since the energy output from solar power plants is variable and prone to fluctuations dependent on the intensity of solar radiation, cloud cover and other factors. Another important limitation is that solar energy is only available during the day, and batteries are still not an economically viable storage option, making careful management of energy generation necessary. Additionally, as the installed capacity of solar power plants grows and plants are increasingly installed at remote locations where location data is not readily available, it is becoming necessary to determine their optimal sizes, locations and configurations using other methods. Machine learning techniques provide solutions that have been more successful in addressing these challenges than manually developed specialized models. Accurate forecasts of solar power production are a necessary factor in making this renewable energy technology a cost-effective and viable energy source. Machine learning techniques can forecast solar power plant generation more accurately than current specialized solar forecasting methods. In a study conducted by Sharma et al., multiple regression techniques, including least-squares support vector machines (SVM) using multiple kernel functions, were compared with other models to develop a site-specific prediction model for solar power generation based on weather parameters. Experimental results showed that the SVM model outperformed the others with up to 27 percent more accuracy. Furthermore, machine learning techniques play a crucial role in assisting decision-making around a plant’s location and orientation, as solar panels need to be oriented according to solar irradiation to absorb optimal energy. Conventional methods for sizing PV plants have generally been used for locations where the required weather data (irradiation, temperature, etc.) and other information concerning the site is readily available. However, these methods cannot be used for sizing PV systems in remote areas where the required data are not available, and thus machine learning techniques need to be employed for estimation purposes. In a study conducted by Mellit et al., an artificial neural network (ANN) model was developed for estimating sizing parameters of stand-alone PV systems. In this model, the inputs are the latitude and longitude of the site, while the outputs are two hybrid sizing parameters. In the proposed model, the relative error with respect to actual data does not exceed 6 percent, thus providing accurate predictions. The model has been evaluated on 16 different sites, and experimental results indicated that the prediction error ranges from 3.75 to 5.95 percent with respect to the sizing parameters. 
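To illustrate the kind of SVM-based, site-specific forecasting studied by Sharma et al., here is a minimal support vector regression sketch over weather features. The features, readings and outputs are toy values chosen for illustration, not the paper’s data or model configuration.

```python
# Minimal sketch: support vector regression that maps weather features to
# solar plant output. Features and toy readings are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns: solar irradiance (W/m^2), ambient temperature (C), cloud cover (%)
X = np.array([
    [820, 32, 10],
    [600, 29, 35],
    [300, 26, 70],
    [950, 34, 5],
    [450, 27, 55],
    [700, 30, 25],
])
y = np.array([4.1, 2.9, 1.3, 4.8, 2.0, 3.5])  # plant output in MW

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict(np.array([[750, 31, 20]])))  # forecast for tomorrow's expected weather
```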
Additionally, metaheuristic search algorithms address plant location optimization problems by providing improved local searches under the assumption of a geometric pattern for the field. Lastly, to maintain grid stability, it is necessary to forecast both short-term and medium-term demand for a power grid in which renewable energy sources contribute a considerable proportion of the energy supply. The MIRABEL system offers forecasting models that target flexibilities in energy supply and demand to help manage production and consumption in the smart grid. The forecasting model combines widely adopted algorithms like SVM and ensemble learners, and can efficiently process new energy measurements to detect changes in upcoming energy production or consumption. It also employs different models for different time scales in order to better manage demand and supply depending on the time domain. Ultimately, machine learning techniques support better operations and management of solar power plants.
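The ensemble idea behind systems like MIRABEL can be sketched by averaging an SVM regressor with a tree ensemble to forecast short-term demand. The calendar and weather features and the toy numbers below are assumptions for illustration, not the MIRABEL system’s actual models.

```python
# Minimal sketch: average an SVM regressor and a random forest to forecast
# short-term grid demand. Features and toy data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import VotingRegressor, RandomForestRegressor
from sklearn.svm import SVR

# Columns: hour of day, temperature (C), previous hour's demand (MW)
X = np.array([
    [6, 24, 310], [9, 28, 420], [12, 33, 510], [15, 35, 560],
    [18, 31, 540], [21, 27, 450], [0, 25, 330], [3, 24, 300],
])
y = np.array([340, 470, 540, 580, 555, 470, 320, 295])  # observed demand (MW)

ensemble = VotingRegressor([
    ("svm", SVR(kernel="rbf", C=100.0)),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
])
ensemble.fit(X, y)
print(ensemble.predict(np.array([[17, 32, 545]])))  # next-hour demand estimate
```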
|
Machine learning techniques support better solar power plant forecasting. Machine learning techniques play a crucial role in deciding where to build a plant when accurate or limited location data is available. Machine learning techniques help maintain smart grid stability. The global solar photovoltaic (PV) installed capacity in 2013 was 138.9 GW and it is expected […]
|
["IT Services"]
|
[]
|
AIM Media House
|
2014-12-11T17:16:27
|
2014
| 656
|
["Go", "TPU", "machine learning", "programming_languages:R", "AI", "neural network", "programming_languages:Go", "Git", "RAG", "R"]
|
["AI", "machine learning", "neural network", "RAG", "TPU", "R", "Go", "Git", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/it-services/machine-learning-better-efficient-solar-power-plants/
| 4
| 10
| 1
| true
| false
| true
|
53,705
|
How Legal Sports Betting Industry Can Win The Gamble With Artificial Intelligence
|
The days of placing a bet on one’s favourite team while hiding from the authorities are almost gone — sports betting is now legal in many parts of the world. Whether one bets out of love for a team or a player, or purely out of dislike for the opposing side, one needs some knowledge, or in technical terms, some data. The sports betting industry is turning to artificial intelligence to work with this data. Legal sports betting around the world involves working with a lot of information and data collection. Various sports leagues around the globe give bookmakers this data to come up with better products to enhance legal betting, and to improve the field with massive data, no technology could work better than artificial intelligence. Artificial intelligence and machine learning are helping predict patterns in a sport, but for AI to work well, the sport has to be predictable and should follow a particular set of rules. For instance, football, which has a specific set of rules, short passages of play and repeatable situations, is well suited to AI models: in one case, over a lakh videos of games were put through an algorithm to surface patterns that AI can predict. The real effect of the technology is felt when it provides these insights in real time, which can influence the significant factors when it comes to betting. What Kind Of Data Is Required? Anybody working with AI and ML knows what an algorithm is — a mathematical formula that organises and evaluates data to solve a complex problem. In legal sports betting, AI makes use of player statistics and team information to predict the possible outcome. For example, in a sport like the NBA, stats like field goals; 3-pointers; free throws; the number of rebounds, assists, steals, blocks and turnovers; and game scores from past seasons are used as data for these algorithms. With advanced analytical tools, AI can revolutionise the way one sees betting. The study of sports betting algorithms and AI is still in its early stages, but companies like Stratagem, Winnerodds and StatsPerform are continuously carrying out research associated with AI and sports betting. Problem With AI Algorithms In Sports Betting Although AI has great potential when it comes to sports betting, gambling and betting are still new territory for AI, and it therefore encounters some problems. Humans Are Needed No matter what insights are given by AI and machine learning, human analysis is always required to interpret the insights provided by the system during an ongoing game. Also, because of the unpredictable nature of sports, human instinct plays a huge role in interpreting these data correctly. Problem With Prediction AI always provides insights based on the data fed to the algorithm, but if a sudden, unfortunate scenario occurs, such as a star player getting injured yet continuing to play through it, the team may end up losing. The algorithm will not take the diminished impact of that star player into account and might mispredict the result for gamblers. Also, no AI is capable of predicting a momentum shift in the game. Although AI can provide beneficial real-time insights into an ongoing game, it largely misses the turnaround where the losing team becomes the winner at the end. The Starting Lineup One of the crucial factors in betting has always been the starting lineup. 
If a gambler gets access to this starting lineup information before the game, it can prove immensely valuable for betting. AI obviously doesn’t have these ‘inside connections’ to get the starting lineup; that’s one advantage bookmakers will always hold over AI when it comes to sports betting. Outlook Artificial intelligence has been impacting everything around us and has also faced its share of criticism. But, over the years, it has delivered on most of its promises in varied industries like healthcare, finance, etc. As AI starts to impact the sports betting industry, it promises a legal and more accessible way of betting through sportsbooks, a billion-dollar software industry that allows users to bet safely from their mobiles and laptops. With such a scenario at hand, AI may soon close the gap that exists between gambling and investing.
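To make the stats-driven prediction described earlier concrete, here is a minimal sketch that turns season-average box-score differentials into a win probability with logistic regression. The features and toy numbers are illustrative assumptions, not any bookmaker’s or vendor’s actual model.

```python
# Minimal sketch: logistic regression over home-minus-away box-score
# differentials to estimate a win probability. Features and toy data are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Differentials: field goal %, 3-point %, rebounds per game, turnovers per game
X = np.array([
    [0.03, 0.02, 4.0, -1.5],
    [-0.02, -0.01, -3.0, 2.0],
    [0.01, 0.04, 1.5, -0.5],
    [-0.04, -0.03, -5.0, 3.0],
    [0.02, 0.00, 2.0, -2.0],
    [-0.01, -0.02, -1.0, 1.0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = home team won

model = LogisticRegression()
model.fit(X, y)
upcoming_game = np.array([[0.015, 0.01, 2.5, -1.0]])
print(model.predict_proba(upcoming_game)[0, 1])  # estimated home win probability
```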
|
Those days are almost gone when one placed a bet on their favourite team hiding from authorities — sports betting is legal now in many parts of the world. While betting, whether one loves their team or a player or places a bet purely hating the opposing team, one needs to have some knowledge aka […]
|
["AI Features"]
|
[]
|
Sameer Balaganur
|
2020-01-13T16:20:36
|
2020
| 721
|
["Go", "machine learning", "artificial intelligence", "ELT", "AI", "programming_languages:R", "ML", "programming_languages:Go", "GAN", "R"]
|
["AI", "artificial intelligence", "machine learning", "ML", "R", "Go", "ELT", "GAN", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/how-legal-sports-betting-industry-can-win-gamble-with-artificial-intelligence/
| 3
| 10
| 1
| false
| true
| true
|
15,464
|
Online learning gets personal with Great Lakes personalized Business Analytics Certificate Program
|
To state the importance of data and analytics, it’s best to put it this way: it regularly features among the most wanted skillsets, and with the current IT landscape in flux, upskilling with a business analytics course has become the safest way to stay relevant in the ever-evolving technology sector. But in the current environment, working professionals have to step away from their roles to dive deep into online analytics courses that offer real value and professional development. MOOCs – classes for the masses that lack outcomes One of the most popular formats of e-learning is the MOOC platform (Massive Open Online Courses), popularized by Coursera and edX, yet MOOCs have notoriously low completion rates and low engagement. MOOCs are self-paced, and working professionals often lack the motivation to reach the finish line. Even upon successful course completion or participation, there is no accredited diploma. According to an OpenCred study, these MOOC certificates are intended to be more a “memento” than a credential. Another study on MOOC trends by the Organisation for Economic Co-operation and Development notes that one of the most distinct features of MOOCs is that they fail to achieve better learning outcomes. For example, the absence of an instructor or other support in case of questions on course material is cited as one of the main reasons for student dropout. Online course with a difference Bearing this in mind, Great Learning, in association with Great Lakes Institute of Management, a top-ranking B-school, has launched the Business Analytics Certificate Program (BACP), a personalized six-month online data analytics certification custom built by highly experienced faculty and industry professionals. Great Lakes is one of the most respected B-schools, and its analytics courses consistently rank in the top 10 in Analytics India Magazine’s rankings. To understand the USP of personalized analytics education better, AIM spoke to Arjun Nair, Director – Learning Experience at Great Learning. “Our analytics programs have been consistently ranked #1 in the country over the last 3 years. These programs, including BACP, are delivered by our world-class faculty members, who are able to blend academic rigor with industry relevance. Years of research and development has gone into creating a highly impactful curriculum and supporting learning material that will enable you to master these topics and have a delightful learning experience,” Nair emphasized. Is e-learning in the tech age successful? To build the workforce of the future, the training ecosystem has to be revamped. While most learning systems have transitioned to a hybrid model, training, or simply put, retraining people requires a personalized, nuanced approach, notes the Pew report. Online training modules are self-paced, and some learners may not have the interest to continue or complete the course. MOOCs are peddled as nanodegrees, and the course content is broken down into short segments. While content in this space is definitely fast-moving, the micro-courses do not offer a premium learning experience, lack the rigour of assessments, and their certificates of completion have been questioned for credibility. A Babson Survey Research Group study cites that only 29.1 percent of academic leaders “accept the value and legitimacy of online education”. In other words, MOOCs haven’t made much headway in adding to the talent pool. 
Here’s how Great Lakes’ BACP is unlike a regular online course: BACP is a career-oriented program that focuses on teaching business analytics with deep impact through personalized mentorship from industry experts. The program features an exhaustive, in-depth curriculum and covers all critical aspects of business analytics in a structured learning framework. The course is helmed by world-class faculty and industry experts, and learners get trained by India’s most celebrated academicians; in fact, two faculty members feature in AIM’s Top 10 Analytics Academicians in India 2017 list. The program features micro classes and personalized mentorship, which provide an interactive setting and encourage more progress. The program takes a hands-on approach to teaching analytics and equips learners with business analytics and modelling skills using Microsoft Excel and R. In addition to practical assignments, each module offers interaction with a mentor and industry guest speakers. The course is characterized by project-driven learning that enables students to learn how data is used to make business decisions. To shed more light on the six-month program: learners are divided into cohorts of five based on their years of work experience or the domain they come from, and personalized mentorship ensures no learner gets left behind and the program objectives are met. Some of the key highlights of BACP are that the course is delivered in a structured learning format, enables personalized learning, and lets learners build their employability profile under the guidance of a mentor. Moreover, it is a blend of applied learning and analytics training and is geared towards graduates and early- or mid-career professionals who plan to advance up the job ladder. Hard Facts The course is divided into six modules and covers 150 hours of learning over a period of six months. The curriculum features some of the most widely used tools and techniques in the industry, such as advanced statistics, R, machine learning and forecasting techniques. The course is backed by six experiential learning projects that aim to strengthen analytical skills in various domains such as finance, marketing, supply chain, healthcare, policy analysis et al. The idea is that students, working in teams of five, are encouraged to solve real-world data analytics cases under the guidance of a mentor. Concurrently, students can also apply their learnings in a different domain, thereby gaining cross-disciplinary business understanding. Besides strong career support (career enhancement sessions with industry experts and resume-building exercises), students can also tap into the Great Lakes alumni network spread across the globe and get insights on how to maximize learning and build a path-breaking career. Another key takeaway from the program is that learners receive personalized education led by reputed faculty members and can benefit from personalized mentorship that is definitely more impactful. Admissions are open. To apply, click here.
|
To state the importance of data and analytics, it’s best to put it this way. It regularly features in the most wanted skillset and with the current IT landscape in a flux, upskilling with a business analytics course has become the safest way to stay relevant in the ever-evolving technology sector. But in the current […]
|
["AI Trends"]
|
["Business Analytics"]
|
Richa Bhatia
|
2017-06-06T09:28:03
|
2017
| 982
|
["Go", "machine learning", "programming_languages:R", "AI", "Git", "RAG", "Aim", "analytics", "Business Analytics", "GAN", "R"]
|
["AI", "machine learning", "analytics", "Aim", "RAG", "R", "Go", "Git", "GAN", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-trends/online-learning-gets-personal-great-lakes-personalized-business-analytics-certificate-program/
| 2
| 10
| 3
| true
| true
| true
|
10,116,706
|
HCLTech and CAST Expand Partnership to Offer Customised Chips to OEMs
|
HCLTech, a leading global technology company, and Computer Aided Software Technologies, Inc. (CAST), a semiconductor intellectual property (IP) cores provider, announced plans to scale their partnership to offer customised chips that help original equipment manufacturers (OEMs) across industries accelerate their digital transformation and automation journeys. HCLTech will enhance design verification, emulation and rapid prototyping of its turnkey system-on-chip (SoC) solutions by leveraging silicon-proven IP cores and controllers from CAST. This will help OEMs in varied industries, including automotive, consumer electronics and logistics, significantly reduce engineering risk and development costs. “CAST shares our vision for innovative, industry-leading electronic systems design. Their high-quality and well-supported IP cores, coupled with HCLTech’s system integration design expertise, will enable us to deliver superior custom chips to our customers worldwide,” said Vijay Guntur, President, Engineering and R&D Services, HCLTech. “Like CAST, HCLTech has a decades-long heritage of delivering superior semiconductor SoC solutions to their customers and partners. We look forward to working together with HCLTech and enhancing the reliability, efficiency and user-friendly nature of semiconductor SoCs,” said Nikos Zervas, CEO at CAST. CAST is a silicon IP provider founded in 1993. CAST’s ASIC and FPGA IP product line includes microcontrollers and processors; compression engines for data, images, and video; interfaces for automotive, aerospace, and other applications; various common peripheral devices; and comprehensive SoC security modules.
|
This will help OEMs in varied industries including automotive, consumer electronics and logistics, to significantly reduce engineering risk and development costs.
|
["AI News"]
|
["HCL Technology"]
|
Pritam Bordoloi
|
2024-03-19T12:50:10
|
2024
| 220
|
["API", "programming_languages:R", "AI", "digital transformation", "Git", "RAG", "automation", "HCL Technology", "R"]
|
["AI", "RAG", "R", "Git", "API", "digital transformation", "automation", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/hcltech-and-cast-expand-partnership-to-offer-customised-chips-to-oems/
| 2
| 8
| 1
| false
| false
| false
|
10,004,909
|
The Journey Of Computer Vision To Healthcare Industry
|
Artificial intelligence is becoming a part of every conversation that we have today. One of the important subfields of AI, computer vision, has recently exploded in terms of advances and use cases. Akshit Priyesh, a data scientist at Capgemini, took the audience through an interesting journey of how research in computer vision has evolved over the years and has now become a prominent part of the healthcare industry. He was addressing the attendees at CVDC 2020, the virtual computer vision developer summit. The Evolution Of Computer Vision. Priyesh shared how a paper titled ‘Receptive fields of single neurones in the cat’s striate cortex’ by D. H. Hubel and T. N. Wiesel laid the foundation for the developments that we see today in computer vision. While experimenting with an anesthetised cat and the response of its neurons to various images that were being displayed, the researchers accidentally discovered that the neurons were activated by the line that appeared while changing images on the projector. It was this research that led to the discovery that human brains perceive images as edges, curves and lines. Following this, many pieces of research have established that the visual processing capabilities of humans start with simple structures. One particular work by David Marr, titled ‘Vision: A Computational Investigation into the Human Representation and Processing of Visual Information’, further studied visual perception and established that vision is hierarchical and that it culminates in a description of three-dimensional objects in the surrounding environment. Priyesh said that while it was groundbreaking at the time, it did not explain the mathematics or calculations behind it. Since then, computer vision has come a long way and is now used in various fields such as self-driving cars, facial recognition, the retail industry and more. The healthcare industry, in particular, has recently begun to witness important use cases. Computer Vision In The Healthcare Industry. One of the emerging AI fields today is computer vision, which can potentially support many different applications delivering life-saving functionalities for patients. Computer vision today assists an increasing number of doctors in diagnosing their patients better, monitoring the evolution of diseases, and prescribing the right treatments. It not only saves time on routine tasks but also trains computers to replicate human sight and understand the objects in front of them. Priyesh shared that currently, the most widespread use cases for computer vision in healthcare are related to the field of radiology and imaging. AI-powered solutions are finding increasing support among doctors because they can help diagnose diseases and conditions from various scans such as X-ray, MRI or CT. Computer vision is also being used to measure blood loss during surgery, e.g. during C-section procedures, measure body fat percentage, and more. Some of the use cases in the healthcare industry are: Precise diagnosis: Computer vision has been extensively used to offer a precise diagnosis of diseases such as cancer and minimise instances of false positives. Timely detection of illness: Many fatal diseases such as cancer need to be diagnosed at an early stage to increase the chances of survival of the patient. Computer vision has been extensively used to detect these diseases in time.
Streamlined medical processes: The use of computer vision can considerably reduce the time that doctors usually take in analysing reports and images. Medical imaging: Computer vision-enabled medical imaging has become quite popular over the years and has proved to be trustworthy in detecting diseases. Health monitoring: It has also been used by doctors to analyse the health and fitness metrics of patients to make faster and better medical decisions. Nuclear medicine: A part of clinical medicine, nuclear medicine deals with the use of radionuclide pharmaceuticals in diagnosis. Computer vision has been explored in this field too. Priyesh shared that in the current times of the COVID pandemic, computer vision is being used to detect the disease and explore potential treatments for the deadly virus. He, along with his team at Capgemini, has even developed a chatbot that detects COVID-positive patients. Based on the user inputs, it estimates the probability of being infected using computer vision.
|
Artificial intelligence is becoming a part of every conversation that we have today. One of the important subfields of AI, computer vision has recently exploded in terms of advances and use cases. Akshit Priyesh, who is a data scientist at Capgemini took through an interesting journey of how the research in computer vision has evolved […]
|
["AI Features"]
|
["Computer Vision"]
|
Srishti Deoras
|
2020-08-15T16:00:07
|
2020
| 689
|
["Replicate", "artificial intelligence", "programming_languages:R", "AI", "computer vision", "Ray", "llm_models:Gemini", "Computer Vision", "Rust", "R", "programming_languages:Rust"]
|
["AI", "artificial intelligence", "computer vision", "Ray", "R", "Rust", "Replicate", "llm_models:Gemini", "programming_languages:R", "programming_languages:Rust"]
|
https://analyticsindiamag.com/ai-features/the-journey-of-computer-vision-to-healthcare-industry/
| 2
| 10
| 1
| true
| false
| true
|
17,693
|
Asset Management Is Being Completely Disrupted By Data Science. Here’s How.
|
The financial services industry has always worked with large volumes of data, and when it comes to asset management, the data volume increases multi-fold. The last decade has witnessed massive growth in the financial services industry in terms of data analytics technologies. While the early algorithms used structured data only, modern machine learning based solutions can yield insights even from highly unstructured records. Moreover, sentiment analysis and image recognition are now being employed to anticipate potential peaks and valleys in the stock market. For example, collecting and analysing social media trends around brands helps the trader foresee whether a company’s stock prices will rise or fall. Despite the changing trend, traditional wealth management companies continue to be late adopters of these technologies and are still seeking ways to become data-driven. Here are the main operations that can be enhanced with a data-driven approach. Data-driven asset management: 1. Smart advisors (or robo-advisors): These advisors have been around for almost a decade and have now become the hottest personalisation trend in the financial management industry. The algorithms consider various customer data – risk tolerance, behaviour, legal benchmarks, preferences – and make recommendations based on this data. By combining multiple data sources, one can increase the dimensionality of models and solve complex optimisation problems that account for hundreds of individual portfolio factors. This allows portfolio managers to suggest tailored investment plans to clients in both B2B and B2C operations. 2. Fraud detection powered by neural networks: Another emerging trend in financial management is anti-money-laundering and fraud-detection models that are powered by neural networks and help in identifying suspicious activities. The system is trained and developed in a way that it can track and assess the behaviour of all the individuals involved in the process. These systems apply deep neural networks to detect fraud by analysing both structured and unstructured data, including all kinds of online footprints. Strong neural networks efficiently detect implicit links between a customer and potential fraud. 3. Predictive analytics: Predictive analytics uses historical data to determine the relationships between inputs and outcomes and to build models that are checked against current data. Stocks, bonds, futures, options and rate movements form a stream of billions of deal records every day, which makes for non-stationary time series data. These often become complex problems for financial analysts because conventional statistical methods fall short both in terms of prediction accuracy and speed. There are three common approaches to handling such data. Machine learning methods: Models are trained on short-term historical data and yield predictions based on it. Stream learning: A predictive model is continuously updated by every new inbound record, which provides better accuracy. Ensemble models: Multiple machine learning models analyse incoming data, and the predictions are based on consolidated forecasting results. 4. Scenario-based analytics: This method lets financial managers analyse possible future events by considering alternative possible outcomes. Instead of showing just one exact picture, it presents several alternative future developments. Computing power and new data processing packages have made it possible to build stress models for company operations and stock market performance.
With this method, one can test millions of scenarios accounting for hundreds of unique market conditions. Why must asset managers start adopting technology? There has been much talk about money managers being slow in adopting technology for asset management. Upgrading to digitisation will reduce the risk of these players losing market share to the digitally savvy businesses that are aiming to disrupt the investment industry. According to a poll conducted by Create Research, out of the 458 asset and wealth managers surveyed, only 27 per cent of wealth managers offer robo-advisers, and only 31 per cent use big data. The asset management industry’s need to modernise comes as it grapples with pressures ranging from tougher regulation to stronger competition. The other reason to consider making the shift is the millennials. They are not just digitally savvy but are also potentially rich. Just to give a sense, millennials will soon make up the largest part of the workforce and also stand a strong chance of inheriting ancestral wealth, which could approximately be $15tn in the U.S. and $12tn in Europe over the next 15-20 years, Create Research said. With all that money and digital savviness at stake, financial advisors should equip themselves to stand a chance in the growing competition. Conclusion: Adopting data science solutions for wealth management is not new in the financial market. However, wealth management organisations have continued to be late adopters of data-driven technologies. Yet, there is no denying the fact that industry leaders have been the first ones to adopt these technologies and have set a benchmark for others to meet. Data science technologies for wealth management are the next big thing in the field. These technologies have the capability to intensify interest in semantic analysis, ML-based time series forecasting, and even scenario-based modelling. Due to a fairly late transformation compared to the financial services industry in general, the smart move today would be to seek partnerships with tech consultancies and fintech start-ups to avoid reinventing the wheel.
|
The financial services industry has always been working with large volumes of data and when it comes to asset management, the data volume increases multi-fold. The last decade has witnessed massive growth in the financial services industry in terms of data analytics technologies. While the early algorithms used structured data only, modern machine learning based […]
|
["IT Services"]
|
["digitisation"]
|
Priya Singh
|
2017-09-13T09:03:18
|
2017
| 859
|
["data science", "machine learning", "AI", "neural network", "digitisation", "ML", "sentiment analysis", "Aim", "analytics", "predictive analytics", "fraud detection"]
|
["AI", "machine learning", "ML", "neural network", "data science", "analytics", "Aim", "predictive analytics", "fraud detection", "sentiment analysis"]
|
https://analyticsindiamag.com/it-services/asset-management-completely-disrupted-data-science-heres/
| 3
| 10
| 3
| false
| true
| true
|
67,872
|
Hands-On Guide to Predict Fake News Using Logistic Regression, SVM and Naive Bayes Methods
|
Millions of news items are published on the internet every day. If we include tweets from Twitter, this figure increases manifold. Nowadays, the internet is becoming the biggest source of spreading fake news. A mechanism is required to identify fake news published on the internet so that readers can be warned accordingly. Some researchers have proposed methods to identify fake news by analysing the text data of the news based on machine learning techniques. Here, we will also discuss machine learning techniques that can identify fake news correctly. In this article, we will train machine learning classifiers to predict whether a given news item is real or fake. For this task, we will train three popular classification algorithms – Logistic Regression, the Support Vector Classifier and Naive Bayes – to predict fake news. After evaluating the performance of all three algorithms, we will conclude which among these three is the best at the task. The Data Set. The dataset used in this article is taken from Kaggle and is publicly available as the Fake and real news dataset. This data set has two CSV files containing true and fake news, each having title, text, subject and date attributes. There are 21,417 true news records and 23,481 fake news records in the true and fake CSV files respectively. To train the model for classification, we will add a target column to both datasets marking each record as true or fake. First, we will import all the required libraries.

#Importing Libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB

After importing the libraries, we will read the CSV files in the program.

#Reading CSV files
true = pd.read_csv("True.csv")
fake = pd.read_csv("Fake.csv")

Here, we will add fake and true labels as the target attribute in both datasets and create our main data set by combining the fake and true datasets.

#Specifying fake and true labels
fake['target'] = 'fake'
true['target'] = 'true'
#News dataset
news = pd.concat([fake, true]).reset_index(drop=True)
news.head()

After specifying the main dataset, we will define the train and test data sets by splitting the main data set. We have kept 20% of the data for testing the classifiers. This can be adjusted accordingly.

#Train-test split
x_train, x_test, y_train, y_test = train_test_split(news['text'], news.target, test_size=0.2, random_state=1)

In the next step, we will classify the news texts as fake or true using the classification algorithms, one by one. First, we will obtain the count vectorizer and term frequencies that will serve as input attributes for the classification model, while the target attribute that we defined above will work as the output attribute. To bind the count vectorizer, TF-IDF transformer and classification model together, the concept of a pipeline is used. A machine learning pipeline is used to help automate machine learning workflows.
Pipelines operate by enabling a sequence of data transformations to be chained together into a model that can be tested and evaluated to achieve an outcome, whether positive or negative. In the first step, we will classify the news text using the Logistic Regression model and evaluate its performance using evaluation metrics.

#Logistic regression classification
pipe1 = Pipeline([('vect', CountVectorizer()),
                  ('tfidf', TfidfTransformer()),
                  ('model', LogisticRegression())])
model_lr = pipe1.fit(x_train, y_train)
lr_pred = model_lr.predict(x_test)
print("Accuracy of Logistic Regression Classifier: {}%".format(round(accuracy_score(y_test, lr_pred)*100, 2)))
print("\nConfusion Matrix of Logistic Regression Classifier:\n")
print(confusion_matrix(y_test, lr_pred))
print("\nClassification Report of Logistic Regression Classifier:\n")
print(classification_report(y_test, lr_pred))

After performing the classification using the logistic regression model, we will classify the news text using the Support Vector Classifier model and evaluate its performance using evaluation metrics.

#Support Vector classification
pipe2 = Pipeline([('vect', CountVectorizer()),
                  ('tfidf', TfidfTransformer()),
                  ('model', LinearSVC())])
model_svc = pipe2.fit(x_train, y_train)
svc_pred = model_svc.predict(x_test)
print("Accuracy of SVM Classifier: {}%".format(round(accuracy_score(y_test, svc_pred)*100, 2)))
print("\nConfusion Matrix of SVM Classifier:\n")
print(confusion_matrix(y_test, svc_pred))
print("\nClassification Report of SVM Classifier:\n")
print(classification_report(y_test, svc_pred))

Finally, we will classify the news text using the Naive Bayes Classifier model and evaluate its performance using evaluation metrics.

#Naive-Bayes classification
pipe3 = Pipeline([('vect', CountVectorizer()),
                  ('tfidf', TfidfTransformer()),
                  ('model', MultinomialNB())])
model_nb = pipe3.fit(x_train, y_train)
nb_pred = model_nb.predict(x_test)
print("Accuracy of Naive Bayes Classifier: {}%".format(round(accuracy_score(y_test, nb_pred)*100, 2)))
print("\nConfusion Matrix of Naive Bayes Classifier:\n")
print(confusion_matrix(y_test, nb_pred))
print("\nClassification Report of Naive Bayes Classifier:\n")
print(classification_report(y_test, nb_pred))

As we can see from the accuracy scores, confusion matrices and classification reports of all three models, we can conclude that the Support Vector Classifier has outperformed the Logistic Regression model and the Multinomial Naive Bayes model in this task. The Support Vector Classifier has given about 100% accuracy in classifying the fake news texts. We can see a snapshot of the predicted labels for the news texts produced by the Support Vector Classifier in the below image.
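As a quick, hypothetical usage sketch (not part of the original walkthrough), once the three pipelines above have been fitted, any new piece of text can be classified directly; the sample headline below is invented purely for illustration, and the printed labels are only examples of what the models might return.

#Hypothetical usage sketch: classifying a new, made-up headline with the fitted pipelines
sample = ["Breaking: celebrity endorses miracle cure that doctors reportedly hate."]
for name, model in [("Logistic Regression", model_lr), ("SVM", model_svc), ("Naive Bayes", model_nb)]:
    # predict() returns an array of labels, e.g. ['fake'] or ['true']
    print(name, "->", model.predict(sample)[0])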
|
In this article, we will train the machine learning classifiers to predict whether given news is real news or fake news. For this task, we will train three popular classification algorithms – Logistics Regression, Support Vector Classifier and the Naive-Bayes to predict the fake news. After evaluating the performance of all three algorithms, we will conclude which among these three is the best in the task.
|
["Deep Tech"]
|
["Classification", "logistic regression", "Naive Bayes classifier", "Support Vector Machine"]
|
Dr. Vaibhav Kumar
|
2020-06-22T15:00:00
|
2020
| 776
|
["Go", "Classification", "NumPy", "machine learning", "TPU", "programming_languages:R", "AI", "data_tools:Pandas", "Naive Bayes classifier", "logistic regression", "Support Vector Machine", "programming_languages:Go", "R", "Pandas"]
|
["AI", "machine learning", "Pandas", "NumPy", "TPU", "R", "Go", "programming_languages:R", "programming_languages:Go", "data_tools:Pandas"]
|
https://analyticsindiamag.com/deep-tech/hands-on-guide-to-predict-fake-news-using-logistic-regression-svm-and-naive-bayes-methods/
| 4
| 10
| 0
| true
| false
| false
|
17,209
|
Analytics India Companies Study 2017
|
Each year we come out with our study of analytics firms in India. The goal is to put numbers to the scale and depth of how various organizations around analytics and related technologies have surfaced in recent years. Here’s our annual study for this year. Read Analytics India Companies Study 2016 Read Analytics India Companies Study 2015 Read Analytics India Companies Study 2013 Read Analytics India Companies Study 2012 Key Trends. Last year has seen the biggest jump in the number of companies in India working on analytics in some shape and form. More than 5,000 companies in India claim to provide analytics as an offering to their customers. This includes a small number of product companies and a larger chunk offering offshore, recruitment or training services. There has been a growth rate of almost 100% year over year in the number of analytics companies in India since last year. Even so, analytics companies in India are still very few in number compared to the strength of analytics companies around the globe. In fact, India accounts for just 7% of global analytics companies, down from 9% last year. Company Size. On average, Indian analytics companies have 179 employees on their payroll, an increase from an average of 160 employees last year. On a global scale, this is quite a good number, as analytics companies across the world employ an average of 132 employees. Almost 77% of analytics companies in India have less than 50 employees, compared to 86% at the global level. Cities Trend. Delhi/NCR trumps Bangalore to house the largest number of analytics firms in India this year, at almost 28%. It is followed by Bangalore at 25% and Mumbai at 16%. Hyderabad, Chennai and Pune are far behind, with their percentages of analytics companies in single digits, as reflected in the graphs above. However, these numbers seem to have not changed much since last year.
|
Each year we come out with our study of Analytics firms in India. The goal is to put numbers into the scale and depth of how various organizations around analytics and related technologies have surfaced in recent years. Here’s our annual study for this year. Read Analytics India Companies Study 2016 Read Analytics India Companies […]
|
["AI Features"]
|
[]
|
Дарья
|
2017-08-24T09:56:06
|
2017
| 328
|
["Go", "programming_languages:R", "AI", "programming_languages:Go", "Git", "RAG", "Aim", "analytics", "GAN", "R"]
|
["AI", "analytics", "Aim", "RAG", "R", "Go", "Git", "GAN", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/analytics-india-companies-study-2017/
| 3
| 10
| 0
| false
| false
| false
|
51,707
|
Baidu Goes On A Patent Frenzy; Applies For ML-Based Audio Synthesis Ownership
|
Baidu has come out on top as the leading artificial intelligence patent applicant in China, eclipsing the likes of Tencent and Huawei. Reportedly, Baidu is also leading in the highly competitive area of intelligent driving, with 1,237 patent applications. After years of research, Baidu has developed a comprehensive AI ecosystem and is now at the forefront of the global AI industry. “Baidu retained the top spot for AI patent applications in China because of our continuous research and investment in developing AI, as well as our strategic focus on patents.” -Victor Liang, VP, Baidu via Baidu. Baidu’s patents cover a wide variety of domains, including deep learning (1,429 patents), NLP (938 patents) and speech recognition (933 patents). While Baidu topped the charts in China, its R&D centre in the US has also applied for patents with the US patent office. In the speech recognition domain especially, Baidu has its eyes locked on audio synthesis using CNNs. Patent application US20190355347A1, titled ‘Spectrogram to waveform synthesis using convolutional networks’, covers a computer-implemented method for training a neural network model for spectrogram inversion and lists the following steps: inputting an input spectrogram comprising a number of frequency channels into a convolutional neural network (CNN); outputting from the CNN a synthesised waveform for the input spectrogram, the input spectrogram having a corresponding ground truth waveform; computing a loss using the corresponding ground truth waveform, the synthesised waveform, and a loss function comprising one or more loss components such as spectral convergence loss; and using the loss to update the CNN. There is a clear mention of using convolutional neural networks, and since CNNs are the lifeblood of many modern-day ML applications, any claim even on a minor part can hurt in the long run. Perils Of A Patent Race. The year 2019 witnessed a sudden growth of interest in owning algorithms. So far, Google has got a bad rap for going after batch normalisation, a widely used technique in deep learning. Even if the intentions are to safeguard the research from falling to pseudo players, this whole ordeal is a slippery slope where the owners can exert leverage over smaller firms using the advanced technology. In the case of Baidu, too, there is a danger of ceding ownership over many audio processing applications. Baidu is a Chinese company, which has added to the growing fears amongst the ML community. Its AI vision was fortified with projects like Apollo, an open source autonomous driving platform, along with other intelligent driving innovations. China has allegedly been involved in many intellectual property thefts from US companies. So, when Baidu’s foreign division applies for a patent, one cannot help but think about the consequences of handing over ownership. According to a 2019 United States Trade report, China continues to be the world’s leading source of counterfeit goods, reflecting its failure to take decisive action to curb the widespread manufacture, domestic sale, and export of counterfeit goods. https://twitter.com/PoliticalShort/status/1202682289408348160 The most important thing any country can do in the current era is protect its trade secrets. Even though Google has also been accused of joining the patent race, adding China to the Baidu equation changes everything.
China has a system that encourages the transaction of intellectual property, allowing the common man to access cutting-edge technology. How big one can get with it is a whole new argument. However, the widespread opening of overseas branches of US companies in China has made IP transfer somewhat reckless. This no doubt comes as a shock to the owners, as there is a danger of new competitors sprouting up using the stolen technology, leading to billions of dollars in losses. This is a serious issue, since China has been consistently notorious overseas when it comes to IP theft. Here are a few cases that got the spotlight, as listed by Jeff Ferry, CPA research director: In 2004, Cisco Systems took Huawei to court for stealing its core router software code and using it in Huawei routers. Huawei routers, widely used in China and Europe, have played a key role in Huawei’s growth into a $95 billion global telecom equipment giant. In 2011, AMSC filed the largest-ever IP theft case in a Chinese court, seeking $1.2 billion in compensation for its losses. AMSC had partnered with a Chinese maker of wind turbine hardware, Sinovel, to sell into the Chinese market, and AMSC sales rose rapidly into the hundreds of millions of dollars. In 2011, AMSC discovered that Sinovel had an illegal copy of the entire AMSC software code on one of its windmills. In 2015, the federal government charged six Chinese citizens with stealing wireless communications technology from two Silicon Valley microchip makers, Avago and Skyworks, and launching their own company to sell that technology in China. Apart from this, Huawei was also accused of stealing patented smartphone camera technology a couple of months ago. The biggest concern for developers regarding patenting can be distilled down to two words: infinite leverage. They fear that aspirants will either be squeezed midway or get discouraged altogether from accessing state-of-the-art technology, which again could lead to outcomes like the much-dreaded AI winter.
|
Baidu has come out on top as the leading artificial intelligence patent application leader eclipsing the likes of Tencent and Huawei. Reportedly, Baidu is also leading in the highly competitive area of intelligent driving, with 1,237 patent applications. After years of research, Baidu has developed a comprehensive AI ecosystem and is now at the forefront […]
|
["AI Trends"]
|
["Baidu", "CNNs", "Machine Learning", "Patent"]
|
Ram Sagar
|
2019-12-11T19:00:13
|
2019
| 871
|
["artificial intelligence", "TPU", "AI", "neural network", "ML", "Machine Learning", "Patent", "RAG", "NLP", "Aim", "deep learning", "Baidu", "CNNs", "R"]
|
["AI", "artificial intelligence", "ML", "deep learning", "neural network", "NLP", "Aim", "RAG", "TPU", "R"]
|
https://analyticsindiamag.com/ai-trends/baidu-patent-china-machine-learning-united-states-ip-theft/
| 4
| 10
| 2
| true
| false
| false
|
10,103,431
|
Microsoft Doesn’t Really Need OpenAI, it Wants AGI
|
Striking at just the right moment, Satya Nadella, chairman and CEO of Microsoft, swiftly onboarded Sam Altman at Microsoft. Altman will be joined by former OpenAI president Greg Brockman and a few other researchers from the company now in dire straits, notably Jakub Pachocki, the person who led GPT-4. Absorbing Altman and his team into Microsoft could possibly be the biggest bet Nadella has made in his nearly decade-long stint as the CEO of Microsoft, bigger than the billions of dollars of investment in OpenAI. However, with the way things are moving, by the time we publish this article, Altman might return to OpenAI, rendering our arguments irrelevant. Reports suggest that, despite the announcements, Altman joining Microsoft is not a done deal. Nadella, in a recent interview with Bloomberg, stated that he will continue to support Altman and his team irrespective of where Altman is. However, it would make more sense for Microsoft to have Altman and the team at Microsoft rather than at OpenAI. The startup’s fate currently remains undecided, even though it has a new CEO in Emmett Shear. Nadella, on the other hand, would want as many OpenAI folks as possible to join this new AI group led by Altman at Microsoft. The end-game. OpenAI, which started off in 2015 as a non-profit, is focussed on achieving artificial general intelligence (AGI). As stated in one of their blogs, their mission has been “to build AGI that is safe and benefits all of humanity”. However, interestingly, according to OpenAI, Microsoft will not have exclusive rights to use OpenAI’s post-AGI models. Due to its USD 13 billion investment in the company, Microsoft currently has exclusive rights to use OpenAI’s models like GPT-4 and GPT-4 Turbo. (Source: OpenAI blog) Once AGI emerges, whether in the form of GPT-5, GPT-6, or an entirely new model, Microsoft will not possess exclusive rights to utilise that technology. Given Microsoft’s corporate nature driven by financial interests, it would want exclusive access to the technology and seek opportunities for monetisation regardless of its origin. “Reality is that an in-house lab led by Sam and Greg might be better for Microsoft than the existing arrangement given the AGI clause,” Gavin Baker, managing partner and CIO at Atreides Management, said in an X post. Even if Microsoft successfully acquires this cutting-edge technology from OpenAI, the blog goes on to clarify that in a for-profit structure, there would be equity caps. These limits are designed to prioritise a balance between commercial objectives and considerations of safety and sustainability, rather than solely pursuing profit maximisation. Achieve AGI at Microsoft. Nevertheless, if Altman and his top team collaborate at Microsoft within a carefully selected group, there’s a potential scenario where Altman could achieve AGI at Microsoft rather than at OpenAI. This scenario would grant Microsoft exclusive access to the technology, providing it with the opportunity to maximise its monetisation, an unsettling but plausible prospect. This could be another reason Nadella was quick to get Altman and Brockman on board at Microsoft as soon as negotiations with the OpenAI board of directors faltered. After all, it was Altman who started the generative AI explosion by releasing ChatGPT to the world nearly a year ago. ‘Come achieve AGI at Microsoft’ might well have been the exact words Nadella used when he tabled the offer to Altman.
So far, besides Altman and Brockman, Pachocki, Aleksander Madry and Szymon Sidor, all previously working for OpenAI, have agreed to join Altman’s new AI group. https://twitter.com/marktenenholtz/status/1726585324271481332 Appearing optimistic, Brockman also announced on X (previously Twitter) that they are going to build something new and that it will be incredible. So far, not much is known about this newly formed group, which Altman will lead, besides the fact that it will be a new advanced research team (possibly with an AGI mission). But it would be interesting to see how much of their work aligns with OpenAI’s. Microsoft does not really need OpenAI. ‘OpenAI is nothing without its people,’ almost all OpenAI employees tweeted yesterday in a synchronised manner resembling a coordinated X campaign, expressing solidarity with those who departed from the company. Moreover, nearly all of them have threatened to resign. Given the turmoil, many other companies working in AI are reportedly trying to poach OpenAI employees. Salesforce CEO Marc Benioff also posted on X, “Salesforce will match any OpenAI researcher who has tendered their resignation full cash & equity OTE to immediately join our Salesforce Einstein Trusted AI research team.” “That talent is the crown jewel of the organisation,” Tammy Madsen, professor of management in the Leavey School of Business at Santa Clara University, told TechCrunch. Given that Altman is already on board, Microsoft would want to bring more talent on board from OpenAI and continue the pursuit of AGI at Microsoft. Brockman also declared on X that more will follow suit. This remains a likely scenario; however, these are uncertain times, and we will have to see how the whole situation pans out. But Nadella has so far said that Microsoft remains committed to its partnership with OpenAI. “We look forward to getting to know Emmett Shear and OpenAI’s new leadership team and working with them,” he posted on X. Currently, Microsoft is banking heavily on OpenAI’s models such as GPT-4 and will continue to need them until Altman’s AI group comes up with newer and better models. Moreover, the intricate clauses of the deal between Microsoft and OpenAI are not public yet. Interestingly, Altman’s new AI team could be working on exactly the same thing as OpenAI, and in a future scenario where Altman’s team has achieved AGI, Microsoft may not need OpenAI anymore. Furthermore, the duration of Microsoft’s ongoing financial support for OpenAI and potential shifts in strategy amid the significant reshuffling pose intriguing uncertainties. The dynamics could again shift significantly, especially if the board at OpenAI resigns and Altman is reinstated as the CEO, altering the entire landscape of these arguments.
|
According to OpenAI, Microsoft will not have exclusive rights to use OpenAI’s post-AGI model.
|
["Global Tech"]
|
["Greg Brockman", "Sam Altman"]
|
Pritam Bordoloi
|
2023-11-21T15:18:19
|
2023
| 986
|
["Go", "ChatGPT", "Sam Altman", "GPT-5", "AI", "OpenAI", "Greg Brockman", "GPT", "generative AI", "Rust", "GAN", "R"]
|
["AI", "generative AI", "GPT-5", "ChatGPT", "OpenAI", "R", "Go", "Rust", "GPT", "GAN"]
|
https://analyticsindiamag.com/global-tech/microsoft-doesnt-really-need-openai-it-wants-agi/
| 2
| 10
| 3
| false
| false
| false
|
10,119,210
|
Financial Times Enters into a Content Licensing Agreement with OpenAI
|
The Financial Times has entered into an agreement with OpenAI to license its content so that the AI startup can build new AI tools. According to a press release from FT, users of ChatGPT will see summaries, quotes, and direct links to FT articles. Any query yielding information from the FT will be clearly credited to the publication. The FT, which is already a user of OpenAI’s products, specifically the ChatGPT Enterprise, recently introduced a beta version of a generative AI search tool called “Ask FT.” This feature, powered by Anthropic’s Claude LLM, enables subscribers to search for information across the publication’s articles. “Apart from the benefits to the FT, there are broader implications for the industry. It’s right, of course, that AI platforms pay publishers for the use of their material,” said FT chief executive John Ridding. “At the same time, it’s clearly in the interests of users that these products contain reliable sources,” he added. This marks OpenAI’s fifth agreement within the past year, adding to a series of similar deals with prominent news organizations such as the US-based Associated Press, Germany’s Axel Springer, France’s Le Monde, and Spain’s Prisa Media. In December, The New York Times became the first major US media organization to file a lawsuit against OpenAI and Microsoft, alleging that these tech giants utilized millions of articles without proper licensing to develop the underlying models of ChatGPT.
|
Agreement comes as OpenAI seeks data from reliable sources to train latest AI models.
|
["AI News"]
|
["ChatGPT", "Microsoft", "OpenAI"]
|
Sukriti Gupta
|
2024-04-29T16:12:05
|
2024
| 233
|
["Anthropic", "ChatGPT", "OpenAI", "AI", "AWS", "GPT", "generative AI", "GAN", "R", "Microsoft", "startup"]
|
["AI", "generative AI", "ChatGPT", "OpenAI", "Anthropic", "AWS", "R", "GPT", "GAN", "startup"]
|
https://analyticsindiamag.com/ai-news-updates/financial-times-enters-into-a-content-licensing-agreement-with-openai/
| 2
| 10
| 3
| false
| false
| false
|
10,049,972
|
Google Upgrades Translatotron, Its Speech-to-Speech Translation Model
|
Google AI has introduced the second version of Translatotron, its S2ST model that can directly translate speech between two different languages without the need for many intermediary subsystems. Conventional cascade S2ST systems are made up of speech recognition, machine translation and speech synthesis subsystems. Given this, cascade systems suffer from challenges such as longer latency, loss of information and compounding errors between subsystems. To address this, Google released Translatotron in 2019, an end-to-end speech-to-speech translation model that the tech giant claimed was the first end-to-end framework to directly translate speech from one language into speech in another language. The single sequence-to-sequence model was used to create synthesised translations that keep the sound of the original speaker’s voice intact. But despite its ability to automatically produce human-like speech, it underperformed compared to a strong baseline cascade S2ST system. Translatotron 2. In response, Google introduced ‘Translatotron 2’, an updated version of the model with improved performance and a new method for transferring the voice to the translated speech. In addition, Google claims the revised version can successfully transfer voice even when the input speech consists of multiple speakers. Tests on three corpora validated that Translatotron 2 outperforms the original Translatotron significantly on translation quality, speech naturalness and speech robustness. The model also aligns better with AI principles and is more secure, preventing potential misuse. For example, in response to deepfakes being created with Translatotron, Google’s paper states, “The trained model is restricted to retain the source speaker’s voice, and unlike the original Translatotron, it is not able to generate speech in a different speaker’s voice, making the model more robust for production deployment, by mitigating potential misuse for creating spoofing audio artefacts.” Architecture. The main components of Translatotron 2 are: a speech encoder, a target phoneme decoder, a target speech synthesiser, and an attention module connecting all the components. The architecture follows that of a direct speech-to-text translation model with the encoder, the attention module and the decoder. In addition, here, the synthesiser is conditioned on the output generated by the attention module and the decoder. (Image: the model architecture, by Google.) How are the two models different? The conditioning difference: In Translatotron 2, the output from the target phoneme decoder is an input to the spectrogram synthesiser, which makes the model easier to train while yielding better performance. The previous model uses this output as an auxiliary loss only. Spectrogram synthesiser: In Translatotron 2, the spectrogram synthesiser is duration based, improving the robustness of the speech. The previous model has an attention-based spectrogram synthesiser that is known to suffer from robustness issues. Attention driving: While both models use an attention-based connection to the encoded source speech, in Translatotron 2 this is driven by the phoneme decoder. This makes sure that the acoustic information seen by the spectrogram synthesiser is aligned with the translated content being synthesised and retains each speaker’s voice. To ensure the model cannot create deepfakes as was possible with the original Translatotron, version 2 uses only a single speech encoder to retain the speaker’s voice.
This works for both linguistic understanding and voice capture while preventing the reproduction of non-source voices. Furthermore, the team used a modified version of PnG NAT to train the model to retain speaker voices across translation. PnG NAT is a TTS model that can perform cross-lingual voice transfer to synthesise training targets. Additionally, Google’s modified version of PnG NAT includes a separately trained speaker encoder to ensure Translatotron 2 is capable of zero-shot voice transfer. ConcatAug. ConcatAug is Google’s proposed concatenation-based data augmentation technique to enable the model to retain each speaker’s voice in the translated speech when there are multiple speakers in the input speech. ConcatAug “augments the training data on the fly by randomly sampling pairs of training examples and concatenating the source speech, the target speech, and the target phoneme sequences into new training examples,” according to the team. The resulting examples then contain two speakers’ voices in both the source and the target speech, and the model learns further from these examples. Performance. The performance tests verified that Translatotron 2 outperforms the original Translatotron by large margins in translation quality, speech naturalness and speech robustness. Notably, the model also excelled on the Fisher corpus, a challenging Spanish-English translation test. The model’s translation quality and speech quality approach those of a strong baseline cascade system. Listen to the audio samples here. Performance on the CoVoST 2 corpus (source: Google), with source languages fr / de / es / ca:
Translatotron 2: 27.0 / 18.8 / 27.7 / 22.5
Translatotron: 18.9 / 10.8 / 18.8 / 13.9
ST (Wang et al. 2020): 27.0 / 18.9 / 28.0 / 23.9
Training Target: 82.1 / 86.0 / 85.1 / 89.3
Additionally, along with the Spanish-to-English S2ST, the model was evaluated in a multilingual setup. Here, the input speech consisted of four different languages, without any indication of which language each utterance was in. The model successfully detected and translated them into English. The research team is positive this makes Translatotron 2 more applicable for production deployment after the mitigation of potential abuse.
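To make the ConcatAug technique described above concrete, here is a minimal illustrative sketch of concatenation-based augmentation. It is an assumption-laden toy version, not Google's pipeline: the field names ('src_speech', 'tgt_speech', 'tgt_phonemes') and the use of plain NumPy arrays and Python lists are invented purely for illustration.

import random
import numpy as np

def concat_aug(dataset):
    # Randomly sample a pair of training examples (dataset is assumed to be a list of
    # dicts holding per-example source speech frames, target speech frames and phonemes)...
    a, b = random.sample(dataset, 2)
    # ...and concatenate source speech, target speech and target phoneme sequences,
    # so the new example contains two speakers' voices in both source and target.
    return {
        "src_speech": np.concatenate([a["src_speech"], b["src_speech"]], axis=0),
        "tgt_speech": np.concatenate([a["tgt_speech"], b["tgt_speech"]], axis=0),
        "tgt_phonemes": a["tgt_phonemes"] + b["tgt_phonemes"],
    }

In an actual training loop, such augmented examples would be generated on the fly and mixed with the original single-speaker examples, matching the "on the fly" behaviour the team describes.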
|
Google claims the revised version can successfully transfer voice even when the input speech consists of multiple speakers.
|
["Global Tech"]
|
["Speech Analytics"]
|
Avi Gopani
|
2021-09-30T14:00:00
|
2021
| 816
|
["Go", "data augmentation", "TPU", "Speech Analytics", "AI", "programming_languages:R", "ML", "programming_languages:Go", "Aim", "R"]
|
["AI", "ML", "Aim", "TPU", "R", "Go", "data augmentation", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/global-tech/google-upgrades-translatotron-its-speech-to-speech-translation-model/
| 4
| 9
| 1
| false
| true
| true
|
45,151
|
The Art In Data Science: From Visualisations To Storytelling
|
Data Science is a high-ranking profession that allows the curious to make game-changing discoveries in the field of Big Data. A report from Indeed, one of the top job sites, has shown a 29% increase in demand for data scientists year over year. Moreover, since 2013, the demand has increased by a whopping 344%. So, what’s the reason for such demand? A data scientist’s fundamental skill is to write code. They are also advanced analysts who emphasise numbers and hidden insights in data. That makes them hardcore lovers of science and statistics. However, science is a complex branch that not everyone understands. What people understand easily is art. Stakeholders from a non-technical background or busy business users find it hard and time-consuming to understand the science behind Data Science. It would be more enduring for data scientists to communicate insights in simple language through memorable methods. Turning Data Science into Art. Data scientists have understood the need for easy insight consumption. Hence, the last decade has seen tremendous growth of terms such as data visualisation, data art, and data stories. The pool of people discovering hidden insights has expanded beyond just data scientists and analysts. Data storytellers and data artists are the new breed, who believe in cultivating insightful stories rather than just bland insights. “The ability to take data — to be able to understand it, to process it, to extract value from it, to visualise it, to communicate it — that’s going to be a hugely important skill in the next decades.” Dr Hal R Varian, Chief Economist, Google. Dr Varian made the above statement in 2006. More than a decade on, every word has turned out to be true. Data storytellers not only play with the numbers to generate insights; they also ensure these insights are easy to consume. To make it happen, Data Science organisations are creating hybrid teams of creatives and analysts, or creative analysts. This serves the purpose of both analysing the data and presenting the underlying story in the most appealing format possible. Statistics show why converting data into art is an intelligent way of consuming insights. An MIT survey says 90% of the information our brain stores is visual. The same survey concludes that a human brain can process and understand a visual in just 13 milliseconds. A 1986 paper from the University of Minnesota states that our brains can process visuals 60,000 times faster than any textual or verbal content. In a survey from the Wharton School of Business, only half of the audience was convinced by a verbal data presentation. Surprisingly, the number increased to 64% when visual language was embedded in it. The same survey also concluded that visualisations in presentations can shorten business meetings by 24%. A Nucleus Research report says that Business Intelligence (BI) with artistic data capabilities offers an ROI of $13.01 for every dollar spent. Tools of Data Artistry. There are many ways in which data artists are taking the torrent of big data and transforming it into art. Data Visualisation: The basic form of data art is data visualisation. Good visualisation acts like eye candy, and people remember it for a long time. Pie charts, histograms and line charts are traditional approaches to data visualisation, whereas chords, choropleths and scatter plots are newer. However, they serve the same purpose: making information beautiful and visually contextualising non-obvious insights. Data Stories: Who doesn’t love stories? Stories are memorable.
While data reveal surprising insights, stories make them worth consuming and memorable. The simple ingredients of a good data story are a problem, an approach, and a solution. Data storytelling companies offer actionable insights to their clients in the form of stories. The art of data storytelling comes with endless room for creativity. Data Comics: Data comics are a new addition to the family of data artistry. The idea is to go minimal in content and not lose focus on insights. A data comic reveals nothing but insights. Inspired by the language of comics, these are a novel way to communicate visual insights. Data comics have brought data-driven storytelling to a new edge. Data storytelling: the new playground of data scientists. Modern-age data scientists are excellent writers and eloquent narrators. They take data storytelling as a structured approach to communicating complex data. Data, visuals, and narratives are the key elements of data storytelling. When a narrative is sprinkled on data, it helps the audience quickly grasp the importance of insights. They can quickly identify outliers and extremes in the data. An insight, no matter how small, is always important. Business users sometimes ignore a few insights, calling them trivial. Narratives add ample summary and commentary and show the importance of insights. Many patterns and outliers are hidden inside the hefty rows and columns of an Excel sheet. Data artists unearth these insights, beautify them, and serve them to enterprises in a Petri dish. This accelerates decision-making among business users as they get to play with the transformed insights. The intersection of narratives, visuals, and data gives rise to better explanations of data, better consumption of insights, and better decisions. Ultimately, a well-crafted data story with all ingredients in place drives change in organisations. And that’s how creative data scientists are using data storytelling as their new playground. (Image credit: Brent Dykes) Levels of Data Scientists: Rising Above Code. Earlier, the tools of data scientists were Excel, Python or R. But the rise of AI and Machine Learning has significantly benefited the process. It has also increased the demand for Data Science professionals. In short, advanced analytics makes it easy to analyse big data. AI and its allies, such as deep learning, machine learning and neural networks, are making businesses invest in them. A PwC report recently mentioned AI’s potential to add $15.7 trillion to the global economy by 2030. This, in turn, would boost the global economy by 14% over what we see today. (Figure caption: 800 runs of a bicycle being pushed to the right. For each run, the path of the front wheel on the ground is shown until the bicycle has fallen over. The unstable oscillatory nature is due to the subcritical speed of the bicycle, which loses further speed with each oscillation. Image credit: Matthew Cook) Firstly, it is good to see that even a trillion-dollar dream is not changing the mindset of data scientists. They are still focused on telling insights in a memorable and interesting format. Secondly, complex technologies such as AI are now being made available to everyone through artistic approaches. Visionaries across the world are working on making AI simple and easy to use. Data art skills are helping in the process. As I mentioned earlier, people understand art more easily than complex science.
Outlook. Data scientists are now data storytellers, and data storytelling is the most essential skill in the digital economy. Data storytellers communicate the drama hidden inside the numbers. The answer to data problems is not only insights; an end-to-end data consultancy accelerates decision-making and informs businesses about considerable pain points. Data art and stories complete the cycle of data consultancy. If we want to make data easy for everyone, we need more data storytellers and artists than analysts and scientists.
|
Data Science is a high-ranking profession that allows the curiosity to make game-changing discoveries in the field of Big Data. A report from Indeed, one of the top job sites has shown a 29% increase in demand for data scientists year over year. Moreover, since 2013, the demand has increased by a whopping 344%. So, […]
|
["AI Features"]
|
["Business Intelligence", "Data Visualisation"]
|
Sunil Sharma
|
2019-08-29T14:00:33
|
2019
| 1,203
|
["data science", "Go", "machine learning", "AI", "neural network", "Data Visualisation", "Git", "Python", "deep learning", "analytics", "Business Intelligence", "R"]
|
["AI", "machine learning", "deep learning", "neural network", "data science", "analytics", "Python", "R", "Go", "Git"]
|
https://analyticsindiamag.com/ai-features/the-art-in-data-science-from-visualisations-to-storytelling/
| 4
| 10
| 2
| false
| false
| true
|
10,140,986
|
OpenAI Launches ChatGPT Desktop Version, Mirroring Microsoft’s Copilot
|
ChatGPT can now work with different apps on macOS and Windows desktops, OpenAI announced on X on 15 November. This marks the company’s first direct attempt at computer vision and agent control. “ChatGPT 🤝 VS Code, Xcode, Terminal, iTerm2. ChatGPT for macOS can now work with apps on your desktop. In this early beta for Plus and Team users, you can let ChatGPT look at coding apps to provide better answers,” OpenAI Developers (@OpenAIDevs) posted on November 14, 2024 (pic.twitter.com/3wMCZfby2U). This early beta update claims to let ChatGPT examine coding apps to provide better answers for Plus and Team users. It not only assists with coding apps like VS Code, Xcode, Terminal, and iTerm2, but also talks to its users (through its voice feature), lets them take screenshots, upload files, and search the web (through SearchGPT). As reported earlier, Anthropic also made Claude Artifacts available to all users on iOS and Android, allowing anyone to create apps easily without writing a single line of code. One ChatGPT feature that becomes highly beneficial in desktop use is the ability to ask about anything. Users can select any section of any document and open ChatGPT to ask for meanings, explanations, and feedback. This is a desktop implementation of ChatGPT’s most evident function. This development follows the discussions from a day ago about OpenAI’s agent, ‘Operator’, which is to be released in January 2025. Rowan Cheung, founder of ‘The Rundown AI’, speculates that the next step beyond this would be to allow ChatGPT to control and see desktops as an agent. OpenAI Follows Suit. In October this year, Microsoft released its ‘Copilot Vision’ to transform autonomous workflows with Copilot. According to Microsoft, these autonomous agents would be the new ‘apps’ for an AI-driven world, executing tasks and managing business functions on behalf of individuals, teams, and departments. Meanwhile, the company also introduced ten new autonomous agents in Dynamics 365 to automate processes like lead generation, customer service, and supplier communication for organisations. Following that, Anthropic made a big announcement by releasing its new Claude 3.5 Sonnet, which can control computers through the beta feature ‘Computer Use’. The company reported that the model made significant progress in agentic coding tasks, which involve AI autonomously generating and manipulating code. Anthropic’s approach with Claude’s computer-use feature stood out because it didn’t rely on multiple agents to perform different tasks; instead, a single agent managed multiple tasks. As compared by AIM earlier, Microsoft integrated Copilot into MS Excel, while Claude directly operated Excel. This called into question the very existence of Copilot. OpenAI wasn’t far behind, even though the moves by Anthropic and others (like Google’s Jarvis, speculated to release this month) had created a stronghold in the AI industry. OpenAI’s focus has also shifted from expanding features to the interface. OpenAI entered this race by introducing the Swarm framework, an approach for creating and deploying multi-agent AI systems. It was the missing piece that simplified the process of creating and managing multiple AI agents, helping them work together to accomplish complex tasks. Following that, the launch of ChatGPT on desktops was a major step for a pioneer in AI to transform the way this chatbot is used, only to be enhanced by ‘Operator’ in January. Now, the chatbot will be able to provide answers, be a companion, and assist with daily tasks.
|
OpenAI enters into computer vision and agent control.
|
["AI News"]
|
["ChatGPT", "OpenAI"]
|
Sanjana Gupta
|
2024-11-15T14:07:57
|
2024
| 550
|
["Anthropic", "ChatGPT", "Go", "OpenAI", "AI", "autonomous agents", "computer vision", "Aim", "Claude 3.5", "R"]
|
["AI", "computer vision", "ChatGPT", "OpenAI", "Claude 3.5", "Anthropic", "Aim", "autonomous agents", "R", "Go"]
|
https://analyticsindiamag.com/ai-news-updates/openai-launches-chatgpt-desktop-version-mirroring-microsofts-copilot/
| 2
| 10
| 2
| true
| false
| false
|
10,135,702
|
Most Successful Companies are the Ones that Pivoted
|
The origin of many big companies is not as straightforward as it seems. Finding the correct product-market fit can take months or even years, and the result is sometimes far removed from the original idea. The term ‘pivot’ was first publicly used by Eric Ries, an entrepreneur and author, in his book about how course correction by founders is important for success. In India, two leading startups, Zepto and Razorpay, pivoted in their early days. Interestingly, both these unicorns are alumni of Y Combinator, the San Francisco-based startup school, which, with a less than 1% acceptance rate, guides founders towards the right pivot. Globally as well, several YC-backed companies like Clipboard Health, Brex, Goat, and Escher Reality, among many others, went through cycles of feedback at YC to reach their consumer base. “The idea maze is a perfect competition,” Garry Tan wrote on X, commenting on the recent launch of Void AI, which is, interestingly, the fifth YC-backed code editor in a market filled with AI editors. Should You Pivot Fast? Globally, the examples are plenty. Below are a few companies that pivoted within a year of launch. Instagram, the social photo-sharing app founded by Kevin Systrom and Mike Krieger in 2010, initially began as Burbn, a location-sharing app where people could check in and upload photos. Within a year, however, the founders pivoted to focus solely on photo sharing, its chief and most-used feature. Instagram reported over two billion monthly active users as of early this year. Acquired later by Facebook (now Meta), the journey of Instagram – both in its pivot and acquisition – is a masterclass in strategy. Twitter, too, was originally a podcasting company called Odeo, and it is interesting to note the social network’s evolution from that to what it is today. The launch of iTunes rendered Odeo’s business model useless, forcing the founders to build on a new idea. In October 2022, Elon Musk acquired Twitter and rebranded it as X. Slack, a cloud-based communication platform for enterprises, was initially founded as an online gaming company called Glitch. Due to the lack of commercial traction, the founders decided to build on its chat feature, which was underrated at the time. In a way, Slack was the result of a Glitch! YouTube, a leading online video platform, found its start as a dating site where people uploaded videos talking about their partners. But within a week, the founders realised the idea was not very unique. By generalising the core product beyond dating videos, they used their internal tech to create the video-sharing app. One of the first videos uploaded on YouTube was posted by one of the founders, Jawed Karim. WhatsApp, the messaging giant, has a similar story: it started as an app merely for sharing statuses with friends. PayPal, an online payments platform, started out as an encryption services application known as Confinity; its journey to what it is today included not one but multiple pivots. Later, eBay acquired PayPal in a deal valued at $1.5 billion. Some Took More than a Year For instance, Hugging Face, an AI and machine learning collaborative platform, began as an entertainment app. After two-and-a-half years, the founder pivoted by launching a model he was working on, and this one immediately gained traction. Notion, the all-in-one productivity platform, had its origins as a website builder. After a very unsuccessful start, the founders took four years to pivot and build on its collaborative feature.
The founders fired the team and relocated to Kyoto to rebuild the app from scratch. Twitch, the famous streaming platform, initially began as a 24-hour reality TV webcaster streaming Justin Kan’s life. It took the founders five years to double down on the games and streaming aspect of the startup to differentiate it from the rest; the core product was applied to a different problem. Later, Amazon acquired Twitch for nearly $1 billion. The Trend Continues Among Startups Since the launch of OpenAI’s ChatGPT in 2022, a lot of startups have pivoted towards building an AI-first product or solution. Earlier this year, SoftBank signalled its pivot towards becoming AI-focused, after funding several AI-driven startups in the ecosystem. Funding for AI startups in India totalled $8.2 million in the April-June 2024 quarter. Despite the enthusiasm, many AI startups get acquired by larger corporations due to challenges like funding. To stand out, it is imperative for startups to focus on building the next LLM instead of rebuilding existing use cases of AI. Many founders face the dilemma of merging, pivoting, or getting acquired when their startup does not perform well in its initial days. Pivoting May Not Always Be The Answer As seen above, while pivoting is considered normal, and sometimes even healthy, for startups, there are times when founders should be cautious about it. “If you pivot over, and over, and over again, it causes whiplash. Whiplash is very bad because it causes founders to give up and not want to work on this anymore, and that actually kills the company. Weirdly, it’s more deadly to your company to get whiplash and get sad than to work on a bad idea,” said Dalton Caldwell, partner at Y Combinator. There is even a term for it within the YC community, known as ‘Pivot Hell’, which founders must avoid at all costs.
|
“The idea maze is a perfect competition.”
|
["AI Features"]
|
["Startups"]
|
Aditi Suresh
|
2024-09-18T19:12:27
|
2024
| 904
|
["Go", "ChatGPT", "Hugging Face", "machine learning", "OpenAI", "AI", "GPT", "CLIP", "GAN", "Startups", "R"]
|
["AI", "machine learning", "ChatGPT", "OpenAI", "Hugging Face", "R", "Go", "GPT", "CLIP", "GAN"]
|
https://analyticsindiamag.com/ai-features/most-successful-companies-are-the-ones-that-pivoted/
| 3
| 10
| 5
| true
| true
| false
|
10,115,679
|
Oracle Enhances Cloud Suite with Additional AI Features for Key Business Areas
|
Oracle at its flagship event Oracle CloudWorld London has announced the integration of new generative AI capabilities into its Oracle Fusion Cloud Applications Suite, a move set to significantly enhance decision-making and user experiences across various business domains. The suite now includes over 50 generative AI use cases, built on Oracle Cloud Infrastructure (OCI) and designed to respect enterprise data, privacy, and security. These capabilities are embedded within the business workflows of finance, supply chain, HR, sales, marketing, and service, aiming to boost productivity, reduce costs, and improve both employee and customer experiences. “We have been using AI in our applications for several years and now we are introducing more ways for customers to take advantage of generative AI across the suite,” said Steve Miranda, executive vice president of applications development at Oracle. “With additional embedded capabilities and an expanded extensibility framework, our customers can quickly and easily take advantage of the latest generative AI advancements.” In the realm of Enterprise Resource Planning (ERP), the suite now includes insight narratives for anomaly and variance detection, management reporting narratives for finance professionals, predictive forecast explanations, and generative AI-powered project program status summaries and project plan generation. For Supply Chain & Manufacturing (SCM), the suite offers item description generation to aid product specialists and supplier recommendations to streamline procurement processes. Additionally, negotiation summaries are now generated more efficiently with AI assistance. Human Capital Management (HCM) benefits from job category landing pages for better candidate engagement, job match explanations to assist candidates in finding suitable roles, a candidate assistant for common inquiries, and manager survey generation for timely employee feedback. Customer Experience (CX) is enhanced with service webchat summaries for call center agents, assisted authoring for sales content to improve productivity, and generative AI for marketing collateral to optimize audience engagement. Oracle’s approach ensures that no customer data is shared with large language model providers or seen by other customers. Role-based security is also embedded directly into workflows, ensuring that only entitled content is recommended to end users. The generative AI capabilities are expected to have a profound impact on customers and industries by streamlining operations and enabling more efficient and informed decision-making processes.
|
The suite now includes over 50 genAI use cases, built on OCI and designed to respect enterprise data, privacy, and security.
|
["AI News"]
|
["Oracle"]
|
Shyam Nandan Upadhyay
|
2024-03-14T15:57:57
|
2024
| 361
|
["Go", "API", "programming_languages:R", "AI", "ML", "programming_languages:Go", "Oracle", "Aim", "ViT", "generative AI", "R"]
|
["AI", "ML", "generative AI", "Aim", "R", "Go", "API", "ViT", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/oracle-enhances-cloud-suite-with-additional-ai-features-for-key-business-areas/
| 2
| 10
| 3
| false
| false
| false
|
10,093,868
|
Indian Govt to Soon Launch Generative AI Services
|
At the AWS India Summit held in Mumbai last week, the cloud giant told AIM that the Indian government is working on generative AI and actively exploring potential use cases across departments and initiatives. This will unfold in the coming months. In February this year, nearly four months after the launch of ChatGPT, it was reported that the Ministry of Electronics and IT (MeitY) would be integrating ChatGPT with WhatsApp to help Indian farmers understand and learn about several government schemes. In fact, Ashwini Vaishnaw, Minister for Electronics and IT, has revealed that the Indian government is already working on something similar to ChatGPT. Large language models (LLMs) like GPT-3.5 and GPT-4 by OpenAI, which power ChatGPT, hold immense potential for the Indian government in multiple areas, including the delivery of government schemes and services as well as administration. By leveraging generative AI, the government can automate and enhance the process of providing essential schemes and services to citizens. The technology can assist in streamlining administrative tasks, improving efficiency, and reducing manual effort. With LLMs, the government can develop intelligent systems that understand user queries, provide accurate information, and even generate personalised responses. ChatGPT in Indic Languages However, one of the challenges of using LLMs is that these models are trained on a vast amount of English data and perform poorly when prompted in non-English languages. This is where Bhashini comes in, an initiative to create large datasets for Indic languages. Bhashini, which is an initiative of AI4Bharat and IIT Madras, was announced by Prime Minister Narendra Modi while inaugurating the Digital India Week 2022 event in Gandhinagar. The IndicTrans translation model under Bhashini is already being used for translation in other initiatives such as KissanAI (renamed from KissanGPT). “As part of Bhashini, we are developing the platform to enable all the things to enrich the Indic language AI models for various tasks like Translation, Speech to Text, Text to Speech, Image to Text etc. All government reports/materials/communications can be generated in all the official languages. The core idea is to stop language being a barrier for any industry,” Aravinth Bheemaraj, engineering leader, Tarento, told AIM. Jugalbandi, a multilingual AI chatbot Further, at this year’s Microsoft Build Conference held in Seattle, Microsoft showcased how a generative AI-driven multilingual chatbot developed in India is already used by citizens in rural areas to access government services. Called Jugalbandi, the chatbot can comprehend inquiries in various languages, whether they are spoken or typed. The system retrieves pertinent programme details, typically documented in English, and delivers them in the native language of the user. Abhigyan Raman, a project officer at AI4Bharat, said that the chatbot, which is powered by GPT models, understands the user’s exact problem in their language and then tries to deliver the right information reliably and cheaply, even if that information exists in some other language in a database somewhere. Currently, the chatbot, which can also be accessed through WhatsApp, is available in 10 of the 22 official languages and covers 171 of approximately 20,000 government programmes. According to Microsoft, Vandna, an 18-year-old resident of Biwan, Haryana, had the opportunity to test the chatbot.
In early April, when Jugalbandi was introduced to the people in her village by community volunteers, Vandna decided to interact with the chatbot. She typed her question in Hindi, asking, “What scholarships are available for me?” Along with her question, she mentioned her field of study, which includes Political Science, Hindi, and History. The chatbot responded by providing a comprehensive list of central and state government programmes that offer scholarships. Vandna selected one of the options and inquired about the eligibility criteria. Jugalbandi promptly provided her with the necessary information and also informed her about the supporting documents required for the application process. While Jugalbandi merely offers a glimpse of the immense potential of LLMs, in the coming months and years, as the technology continues to mature, LLMs could become ubiquitous. They could power various government chatbots such as MyGov Helpdesk, Umang Chatbot, DigitBot, CoWin Chatbot, and AskDISHA, enabling seamless and intelligent interactions between citizens and government services. Moreover, LLMs could assist government servants in performing administrative tasks with greater efficiency, transforming the way bureaucratic work is undertaken.
|
During this year’s Microsoft Build Conference, Microsoft demonstrated the implementation of a Generative AI-powered multilingual chatbot developed in India
|
["AI Highlights"]
|
["AI4Bharat", "bhasini"]
|
Pritam Bordoloi
|
2023-05-24T16:00:00
|
2023
| 704
|
["ChatGPT", "OpenAI", "AI", "chatbots", "AWS", "ML", "RAG", "bhasini", "Aim", "generative AI", "AI4Bharat", "R"]
|
["AI", "ML", "generative AI", "ChatGPT", "OpenAI", "Aim", "RAG", "chatbots", "AWS", "R"]
|
https://analyticsindiamag.com/ai-highlights/indian-govt-to-soon-launch-generative-ai-services/
| 2
| 10
| 1
| false
| false
| false
|
10,047,588
|
38 Billion Reasons Why Databricks Is The Next Big Thing For Indian Enterprise AI
|
“More than 5,000 organisations worldwide, and over 40% of the Fortune 500, rely on the Databricks Lakehouse Platform.” On Tuesday, the data and AI company Databricks announced a $1.6 billion funding round, bringing its total funding to almost $3.6 billion. The Series H round, led by Morgan Stanley, puts Databricks at a record $38 billion post-money valuation. Founded by the creators of Apache Spark, Databricks is an enterprise software company known for developing widely used projects such as Delta Lake, MLflow, and Koalas for data engineers and data scientists. Databricks develops a web-based platform for working with Spark that provides automated cluster management and IPython-style notebooks. Known as the world’s first lakehouse platform in the cloud, Databricks has pioneered an open and unified architecture for data and AI, which brings the reliability, governance, and performance of a data warehouse directly to the data lakes where most organisations already store all of their data. Databricks tools allow customers to build data lakehouses on AWS, Microsoft Azure, and Google Cloud to support every data and analytics workload on a single platform, so organisations no longer have to worry about architectural complexity and infrastructure costs. (Image credits: Crunchbase) Today, hundreds of leading organisations around the world are using the Databricks Lakehouse Platform. A few of the top features that contribute to Databricks’ successful run: it enables seamless migration for data deployment and orchestration on multiple cloud platforms; eases cross-team collaboration between data engineers and data scientists; offers a centralised, consistent view of customer segments at scale; simplifies developing a feature store to accommodate ML-ready data pipelines; makes it easy to build a common data model with Databricks Delta Lake, irrespective of the data source; and, as a data lakehouse, avoids the functionality limitations of data warehouses and data lakes. Also Read: Why Is Databricks Gaining Traction? Databricks’ Foothold In India Databricks set foot on Indian soil back in 2017. Since then, the company has grown at a brisk pace while attracting a talented pool of data specialists. Today, the company’s data teams support 5,000+ global customers. Databricks’ rise in popularity also coincides with India’s rapid pace of digital transition. India, which started out as a testbed for outsourcing and backend services for global companies, is now quickly moving to become a centre for startups, fintech, and software development. These new-age startups and data-rich legacy organisations need tools that allow them to tap into the data they have at hand, and Databricks’ custom-made lakehouse and other ML tools are the right fit for many such use cases. Let’s take a look at how Databricks helped India Inc. leverage data: 1| Viacom18 on Databricks Viacom18 Media is one of India’s fastest-growing entertainment networks, offering multi-platform brand experiences to 600+ million monthly viewers. With Databricks, Viacom18 has streamlined its infrastructure management, increased data pipeline speeds, and improved productivity among its data teams. With millions of consumers across India, the team at Viacom18 had to ingest and process over 45,000 hours of daily content, which easily generated 700 GB to 1 TB of data per day. Viacom18, which was originally using Hadoop services, failed to process 90 days of rolling data optimally. Switching to Azure Databricks helped the Indian media giant “slice and dice” tonnes of data and deliver customer insights to its data teams.
According to Parijat Dey, Viacom18’s AVP of Digital Transformation and Technology, Azure Databricks has remarkably streamlined processes and improved productivity by an estimated 26%. 2| MakeMyTrip on Databricks MakeMyTrip (MMT) has nearly 45 million monthly active users. The company uses machine learning for its personalised recommender systems and started incorporating Databricks into its toolstack for seamless migration, data deployment and orchestration as the business scaled. According to Piyush Kumar, head of data platform engineering at MakeMyTrip, the Databricks platform consolidates all user data so that teams can have a single, unified view for richer insights. Databricks is actively hiring for various data-related roles for its Bengaluru and Mumbai offices, ranging from data engineers to data and AI sales specialists across multiple domains. As the data architecture’s popularity across data-driven organisations continues to grow rapidly, Databricks will look to capitalise, and thanks to its record funding, it can now expand its reach in countries like India, where the enterprise AI ecosystem is in its nascent stage. Snowflake, which is considered one of Databricks’ top competitors, opened shop in India last year during the pandemic and is still gaining its footing. Without a rival in sight and a burgeoning data awareness among Indian companies, Databricks is well positioned to run riot in the exploding enterprise AI scene in India.
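To make the lakehouse pattern described above concrete, here is a minimal PySpark sketch of writing and reading a Delta Lake table. It is an illustration under stated assumptions, not a Databricks-specific workflow: the storage path and column names are hypothetical, and it assumes the open-source delta-spark package and its jars are available to the Spark session.

```python
# Minimal Delta Lake sketch (hypothetical path and columns; assumes delta-spark is installed
# and the Delta jars are on the Spark classpath).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    # Standard open-source configuration that registers Delta Lake with Spark
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Write raw events as a Delta table on local or cloud storage
events = spark.createDataFrame(
    [("u1", "play", 120), ("u2", "pause", 45)],
    ["user_id", "action", "duration_sec"],
)
events.write.format("delta").mode("append").save("/tmp/lakehouse/events")

# Read it back with warehouse-style reliability (ACID transactions, schema enforcement)
daily = spark.read.format("delta").load("/tmp/lakehouse/events")
daily.groupBy("action").count().show()
```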
|
“More than 5,000 organisations worldwide, and over 40% of the Fortune 500, rely on the Databricks Lakehouse Platform.” On Tuesday, the data and AI company Databricks announced a $1.6 billion funding round, bringing its total funding to almost $3.6B. The Series H round, led by Morgan Stanley, puts Databricks at a record $38 billion post-money valuation. Founded […]
|
["AI Trends"]
|
["Apache Spark", "how to measure twitter influence", "load data python"]
|
Ram Sagar
|
2021-09-03T15:00:00
|
2021
| 757
|
["load data python", "machine learning", "AWS", "AI", "how to measure twitter influence", "ML", "Apache Spark", "RAG", "analytics", "MLflow", "Azure", "Snowflake"]
|
["AI", "machine learning", "ML", "analytics", "MLflow", "RAG", "AWS", "Azure", "Apache Spark", "Snowflake"]
|
https://analyticsindiamag.com/ai-trends/databricks-38-billion-valuation-india-enterprise-ai/
| 4
| 10
| 3
| true
| false
| false
|
10,007,076
|
The Solution Approach Of Winners Of Product Sentiment Classification Hackathon
|
MachineHack successfully conducted the eighteenth installment of its weekend hackathon series this Monday. The Product Sentiment Classification: Weekend Hackathon #19 provided contestants with an opportunity to develop a machine learning model that accurately classifies products into four sentiment classes based on the raw text reviews provided by users. Data science enthusiasts greatly welcomed the hackathon, with over 257 registrations and active participation from close to 92 practitioners. Out of the 257 competitors, three topped our leaderboard. In this article, we introduce the winners and describe the approaches they took to solve the problem. #1| Prashant Arora Prashant has had an amazing journey, having participated in several competitions and hackathons on platforms similar to MachineHack. While he has learnt many things along the way, his main learning was to try and execute different code snippets across different use cases and datasets. After a lot of practice and perseverance, he can now analyse and generalise a dataset much better. Prashant started competing on the HackerEarth platform when he was still new to the field and got a poor ranking, but with participation in more competitions he gradually learnt the various tools and techniques that finally helped him achieve this feat on MachineHack. Approach to solve the problem The task was to predict sentiment based on the product description. Since contestants were provided with a small dataset, pre-trained models were the only realistic option for the competition. Searching the web, he found the sentence-transformers library and, with its help, used the pre-trained RoBERTa-large and RoBERTa-base models developed by Facebook Research. He created two simple data frames of embeddings generated from the large and base models, trained separate CatBoost models on each of these data frames, and a simple average ensemble of the two resulted in the best score. “MachineHack has always been at the top of my work list, especially as it’s weekend hackathons. These small-time competitions have raised a competitive feeling in our minds and have helped to improve ourselves much more.” – Prashant shared his opinion about MachineHack. Checkout Prashant’s Solution here #2| Yash Kashyap Yash was introduced to data science during his second year of college, almost a year ago. He had a gradual start, familiar with just three terms – Data Science, Machine Learning, and Linear Regression. Fortunately, he found a helpful senior who guided him and pointed him to sources for learning the basics. During the lockdown period, he went from zero to where he is right now. He started participating in competitions from May and began to feel confident about his skills; since then, he has not looked back. Currently, he is exploring neural networks and deep learning techniques and deeply enjoys what he does. Approach to solve the problem He used the RoBERTa-large model to generate word embeddings. From the resulting embeddings, he created 10 PCA components. Using some newly created features, he trained a CatBoost model to get his current score on the leaderboard. “I had a great experience in MachineHack so far. I started from being in the bottom and learned a lot from the solution that is posted on GitHub. I now really feel great to find myself in a respectable position in the leaderboard.” – Yash shared his opinion.
Checkout Yash’s approach here #3| Snehan Kekre Snehan started his data science journey in his sophomore year of undergrad at Minerva Schools in San Francisco. After learning about AI and ML in academia, he joined Rhyme.com as a subject matter expert in data science, where he was tasked with creating hands-on educational content on data science and machine learning. Since the acquisition of Rhyme.com by Coursera in 2019, he has been a data science and ML instructor at Coursera with over 70,000 learners. He occasionally participates in data science competitions when the topics are interesting, provided he can make time. Approach to solve the problem His approach was very bare-bones and minimal. The network architecture was inspired by “Wide & Deep Learning: Better Together with TensorFlow”, where the text features pass through the deep part of the network, while the categorical features make up the wide part. Leveraging this transfer-learning paradigm, he made use of the pre-trained Universal Sentence Encoder from TensorFlow Hub. It encodes text into high-dimensional vectors that can be used for text classification and other downstream NLP tasks, takes care of all the text pre-processing, and was trained on a very large corpus. So rather than learning embeddings from scratch, he leveraged the massive amount of compute already spent by someone else and simply loaded the Universal Sentence Encoder as an ordinary Keras layer. The raw sentences are fed into this embedding layer, generating 512-dimensional sentence embeddings, which are then fed through a couple of dense layers with dropout regularisation. The categorical feature was one-hot encoded and concatenated with the output of the embedding-to-dense layers. This model was fit on 90% of the training data and validated on the remaining 10%. After noting the evaluation metrics, he re-trained the model on all the data, with the number of training epochs set to the one that gave the lowest validation loss in the previous run. Using this trained model, he obtained predictions on the test data and submitted them. He was working over the weekend, so he couldn’t make time for hyperparameter optimisation and feature engineering. “MachineHack is an amazing platform, especially for beginners. The community is constantly growing, with new and brilliant participants competing. I intend to continue using MachineHack to practice and refresh my knowledge of data science.” – Snehan shared his opinion. Checkout Snehan’s Solution here
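For readers who want to try the pattern used by the first and second place winners, below is a minimal, illustrative sketch of the sentence-embedding-plus-CatBoost pipeline described above. It is not the winners' actual code: the model name, file names, column names, and hyperparameters are assumptions for illustration, and it presumes the sentence-transformers and catboost packages are installed.

```python
# Illustrative sketch of the embeddings + CatBoost approach (not the winners' code).
# Assumes hypothetical CSVs with a "Product_Description" text column and a "Sentiment" label.
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer
from catboost import CatBoostClassifier

# Any pre-trained sentence encoder works; this public model is just an example.
encoder = SentenceTransformer("all-roberta-large-v1")

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Turn raw review text into fixed-size dense embeddings.
X_train = encoder.encode(train["Product_Description"].tolist(), show_progress_bar=True)
X_test = encoder.encode(test["Product_Description"].tolist(), show_progress_bar=True)
y_train = train["Sentiment"].values

# Gradient-boosted trees on top of the embeddings (hyperparameters are placeholders).
model = CatBoostClassifier(iterations=500, learning_rate=0.05,
                           loss_function="MultiClass", verbose=100)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)   # class probabilities for the submission file
preds = np.argmax(probs, axis=1)      # hard labels if needed
```

Averaging the probabilities of two such models built on different encoders, as the winner describes, is a one-line extension of this sketch.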
|
MachineHack successfully conducted its eighteenth installment of the weekend hackathon series this Monday. The Product Sentiment Classification: Weekend Hackathon #19 hackathon provided the contestants with an opportunity to develop a machine learning model to accurately classify various products into four different classes of sentiments based on the raw text review provided by the user. Data […]
|
["Deep Tech"]
|
["Hackathon Winners", "Machinehack Hackathon", "machinehack winners", "tensorflow gradient"]
|
Anurag Upadhyaya
|
2020-09-09T18:00:43
|
2020
| 974
|
["data science", "machine learning", "Keras", "AI", "neural network", "TensorFlow", "ML", "NLP", "deep learning", "tensorflow gradient", "Machinehack Hackathon", "CatBoost", "machinehack winners", "Hackathon Winners"]
|
["AI", "machine learning", "ML", "deep learning", "neural network", "NLP", "data science", "TensorFlow", "Keras", "CatBoost"]
|
https://analyticsindiamag.com/deep-tech/the-solution-approach-of-winners-of-product-sentiment-classification-hackathon/
| 4
| 10
| 0
| false
| true
| false
|
46,115
|
Highlights from Cypher 2019 | The Event Which Defines The AI & Analytics Industry
|
It is exciting times in AI, so fasten your seatbelts — this was the reigning sentiment at Cypher 2019, which opened with a record 900+ delegates attending the premier analytics and AI summit. The flagship conference serves as an excellent platform to network and learn from leading AI thought leaders, fellow practitioners and researchers. We bring you highlights from Day 1 — quite an action-packed day with 29 sessions spanning an array of keynotes, tech talks and knowledge talks held by 40+ speakers — all this just on Day 1. Cypher 2019 kicked off with a keynote by AI luminary and global business leader Vikram Mahidhar, Senior Vice President – Artificial Intelligence Solutions at Genpact, with a compelling narrative around the next wave of AI-augmented intelligence. The keynote touched upon some of the toughest challenges AI is solving in the real world and how Genpact is delivering real value to businesses around the globe. This was followed by a talk by SAP’s VP & Chief Evangelist, APJ & GC, Shailendra Kumar, who shared insights on how AI creates value in the experience economy and allows businesses to gain a new level of visibility into data. Also taking the keynote stage was enterprise leader Deep Thomas, Group Chief Data and Analytics Officer at Aditya Birla Group, who has played a pivotal role in supercharging India’s multinationals. Thomas shared his vision around building a lean organisation and “making the move from organisations as machines to organisations as organisms”. In this journey, leaders play an important role in showing the right direction and enabling action. Thomas also added that ABG is one of the few multinationals that welcomes both industry giants — AWS and Google. Bridgei2i’s Venkat Subramaniam laid down the “Value Roadmap” for AI in businesses and explained how the enterprise AI ecosystem is set to boom in India. Up next, Bhavik Gandhi from Shaadi.com, whose talk “Are marriages made in heaven or do algos decide who we love” made participants question their dating website profiles and their preferences to swipe left or right, also shared his insights into how an algorithm perceives whether a profile is fake or not. He cited an example: “Usually most of the fraudsters claim on Shaadi.com profile that they don’t have any siblings while the other way round may not necessarily be true but our algorithms have found out that fraudsters and scammers ask for money on the basis that they have no immediate family to rely on.” We had a lineup of tech talks by Jigsaw, Evalueserve, Ericsson, Aditya Birla Group, Praxis Business School and MiQ Digital that took attendees through some of the toughest challenges in getting machine learning projects off the ground, such as finding and labelling solid training data. The day closed with a very interesting debate between Chris Arnold of Wells Fargo and Anshu Sharma Raja of Standard Chartered Bank, with Arnold dubbing AI as lumpy, while Raja believes AI has changed our lives significantly.
|
It is exciting times in AI, so fasten your seatbelts — this was the reigning sentiment at Cypher 2019 that opened with a record number of 900+ delegates attending the premier analytics and AI summit. The flagship conference serves as an excellent platform to network and learn from leading AI thought leaders, fellow practitioners and […]
|
["AI Features"]
|
[]
|
AIM Media House
|
2019-09-18T18:53:18
|
2019
| 503
|
["Go", "machine learning", "artificial intelligence", "AWS", "AI", "Git", "Ray", "Aim", "analytics", "R"]
|
["AI", "artificial intelligence", "machine learning", "analytics", "Aim", "Ray", "AWS", "R", "Go", "Git"]
|
https://analyticsindiamag.com/ai-features/highlights-from-cypher-2019-the-event-which-defines-the-ai-analytics-industry/
| 3
| 10
| 3
| false
| true
| false
|
27,425
|
Machine Learning Is Chasing Out DDoS, The Newest Evil In Cyber Security
|
One of the most dangerous issues looming over the computing world is security threats. It is estimated that around three trillion dollars are lost to cyber crime every year, a figure expected to double by 2021. With all of these threats lurking around, it is difficult to track and eliminate every one of them, especially as the number of users rises exponentially. The most prevalent among existing cyber threats today is the distributed denial of service (DDoS) attack. DDoS attacks have adversely affected businesses on a large scale. Now, with machine learning prevailing in the tech ecosystem, eliminating DDoS attacks has found a new way forward. Session Initiation Protocol (SIP) And Voice Over Internet Protocol (VoIP) With the growing number of digital devices and the abundant availability of the internet, VoIP is the preferred method for voice and multimedia communications. The Session Initiation Protocol (SIP) is the popular means of initiating and managing these VoIP sessions. A simple version of the SIP/VoIP architecture is given below: User Agent (UA): The active entities in the session, which represent the endpoints of SIP. For example, in the context of voice communications, the caller and the receiver denote the endpoints in the session. SIP Proxy Server: An intermediate entity which acts as a client and a server simultaneously during the session. The role of this server is to send and receive requests as well as transfer information to and from the users. Registrar: This component takes care of authentication and register requests for the UA. All of the SIP communication is logged by the VoIP provider. This is important because it gives service providers billing and accounting information based on users’ activity. Interestingly, it can also reveal intrusion or suspicious activity present in the network, which can become a breeding ground for DDoS attacks if left neglected. Aggregating ML Techniques In VoIP The researchers consider the same SIP/VoIP architecture and use five standard ML classifier algorithms in their experiments: sequential minimal optimisation, Naive Bayes, neural networks, decision trees, and random forest. These algorithms are set up to deal with the communications directly in the experiment. Classification features are then generated once the network traffic is anonymised using a keyed-hash message authentication code (HMAC) for the VoIP communications. The algorithms are tested under 15 DDoS attack scenarios. To do this, the researchers designed a ‘test bed’ of DDoS simulations, shown below: DDoS simulation test-bed (Image courtesy: Z Tsiatsikas and researchers) “Three or four different Virtual Machines (VMs) have been used for the SIP proxy, the legitimate users, and the generation of the attack traffic depending on the scenario. All VMs run on an i7 processor 2.2 GHz machine having 6GB of RAM. For the SIP proxy, we employed the widely known VoIP server Kamailio (kam, 2014). We simulated distinct patterns for both legitimate and DoS attack traffic using sipp v.3.21 and sipsak2 tools respectively. Furthermore, for the simulation of DDoS attack, the SIPp-DD tool has been used. The well-known Weka tool has been employed for ML analysis.” The training and testing process for the algorithms includes both normal traffic and attack traffic.
To simulate the attack traffic, they use a range of random high call rates to give a feel of real VoIP, whereas the normal traffic has normal, observed call rates. The training scenario in the experiment is denoted SN1 and the testing scenarios are denoted SN1.1, SN1.2, SN1.3, and so on. A detailed description is given here.
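As a rough illustration of this classification setup (the researchers used Weka, not Python), the sketch below trains two of the listed classifiers on a hypothetical table of per-session SIP features. The CSV path, feature columns, and the binary attack label are all assumptions for demonstration.

```python
# Illustrative sketch only: the paper uses Weka; this recreates the idea in scikit-learn.
# Assumes a hypothetical CSV of per-session SIP features with a binary "is_attack" label.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

data = pd.read_csv("sip_sessions.csv")        # hypothetical dataset
X = data.drop(columns=["is_attack"])          # e.g. request rates, message counts per session
y = data["is_attack"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

for name, clf in [("Random forest", RandomForestClassifier(n_estimators=200, random_state=42)),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(X_test)))
```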
|
One of the most dangerous aspects looming the computer world is security threats. It is estimated that around three trillion dollars are lost in cyber crimes every year. This figure is expected to double by 2021. With all of these threats lurking around, it is difficult to track and eliminate every threat, especially as the […]
|
["AI Features"]
|
["classifiers", "Cyber Security", "internet", "Machine Learning", "random forest"]
|
Abhishek Sharma
|
2018-08-19T04:01:25
|
2018
| 591
|
["Go", "Cyber Security", "machine learning", "programming_languages:R", "AI", "neural network", "ML", "Machine Learning", "Git", "programming_languages:Go", "random forest", "classifiers", "ViT", "R", "internet"]
|
["AI", "machine learning", "ML", "neural network", "R", "Go", "Git", "ViT", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/machine-learning-chasing-out-ddos-cyber-security/
| 3
| 10
| 1
| false
| true
| true
|
10,138,856
|
Microsoft Launches Inference Framework to Run 100B 1-Bit LLMs on Local Devices
|
Microsoft has launched BitNet.cpp, an inference framework for 1-bit large language models, enabling fast and efficient inference for models like BitNet b1.58. Earlier this year, Microsoft published an extensive paper on 1-bit LLMs. The framework offers a suite of optimised kernels that currently support lossless inference on CPU, with plans for NPU and GPU support in the future. The crux of this innovation lies in the representation of each parameter in the model, commonly known as weights, using only 1.58 bits. Unlike traditional LLMs, which often employ 16-bit floating-point values (FP16) or FP4 by NVIDIA for weights, BitNet b1.58 restricts each weight to one of three values: -1, 0, or 1. This substantial reduction in bit usage is the cornerstone of the proposed model. It performs as well as the traditional ones with the same size and training data in terms of end-task performance. The initial release is optimised for ARM and x86 CPUs, showcasing significant performance improvements. On ARM CPUs, speedups range from 1.37x to 5.07x, particularly benefiting larger models. Energy consumption is also reduced, with decreases of 55.4% to 70.0%. On x86 CPUs, speedups vary from 2.37x to 6.17x, alongside energy reductions between 71.9% to 82.2%. Notably, BitNet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving processing speeds comparable to human reading, at 5-7 tokens per second. BitNet.cpp supports a variety of 1-bit models available on Hugging Face and aims to inspire the development of additional 1-bit LLMs in large-scale settings. The tested models are primarily dummy setups used to illustrate the framework’s capabilities. A demo showcasing BitNet.cpp running a BitNet b1.58 3B model on Apple M2 is available for review. The project timeline indicates the 1.0 release occurred on October 17, 2024, alongside prior advancements in 1-bit transformers and LLM scaling. The installation process for BitNet.cpp requires Python 3.9, CMake 3.22, and Clang 18. For Windows users, Visual Studio 2022 is necessary, with specific options selected during installation. Debian/Ubuntu users can utilise an automatic installation script for convenience. The repository can be cloned from GitHub, and dependencies can be installed via conda. Usage instructions detail how to run inference with the quantised model and conduct benchmarks. Scripts are provided for users to benchmark their models effectively, ensuring the framework’s versatility in various applications. This project builds on the llama.cpp framework and acknowledges contributions from the open-source community, particularly the T-MAC team for their input on low-bit LLM inference methods. More updates and details about future enhancements will be shared soon.
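To give a feel for what the 1.58-bit representation means in practice, here is a small NumPy sketch of ternary (-1/0/+1) weight quantisation in the spirit of the BitNet b1.58 paper. It is an illustration of the idea only, not code from the BitNet.cpp repository, and the per-tensor absmean scaling shown is a simplified assumption.

```python
# Simplified illustration of ternary weight quantisation (not BitNet.cpp code).
import numpy as np

def quantize_ternary(w: np.ndarray, eps: float = 1e-8):
    """Map full-precision weights to {-1, 0, +1} plus a per-tensor scale."""
    scale = np.abs(w).mean() + eps                    # absmean scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
w_q, scale = quantize_ternary(w)

print(w_q)                                            # every entry is -1, 0 or 1
print(np.abs(w - w_q * scale).mean())                 # rough reconstruction error
```

Because each weight needs only about 1.58 bits instead of 16, matrix multiplications reduce largely to additions and subtractions, which is what allows frameworks like BitNet.cpp to run large models efficiently on CPUs.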
|
BitNet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving processing speeds comparable to human reading, at 5-7 tokens per second.
|
["AI News"]
|
["Microsoft"]
|
Siddharth Jindal
|
2024-10-18T15:38:53
|
2024
| 417
|
["Hugging Face", "AI", "innovation", "Transformers", "Git", "Python", "Aim", "llm_models:Llama", "GitHub", "R", "Microsoft"]
|
["AI", "Aim", "Hugging Face", "Transformers", "Python", "R", "Git", "GitHub", "innovation", "llm_models:Llama"]
|
https://analyticsindiamag.com/ai-news-updates/microsoft-launches-inference-framework-to-run-100b-1-bit-llms-on-local-devices/
| 3
| 10
| 0
| false
| true
| false
|
43,160
|
Indian-Origin Scientists Develop New AI System To Stop Deepfake Videos
|
Dr Amit Roy-Chowdhury, professor of electrical and computer engineering, University of California, Riverside. (Image source: UCR) With advanced image journaling tools, one can now easily alter the semantic meaning of images using manipulation techniques like copy-clone and object splicing/removal, which can mislead viewers. One of the gravest and most notorious examples of this sort of tampering is deepfakes. At a time when these videos are threatening the privacy of users, a team led by an Indian-origin scientist has developed an artificial intelligence-driven deep neural network that can identify manipulated images at the pixel level with high precision. Amit Roy-Chowdhury, professor of electrical and computer engineering at the University of California, Riverside, has developed a high-confidence manipulation localisation architecture which utilises resampling features, LSTM cells, and an encoder-decoder network to segment manipulated regions from non-manipulated ones. Speaking about his work, Roy-Chowdhury said, “We trained the system to distinguish between manipulated and nonmanipulated images, and now if you give it a new image it is able to provide a probability that that image is manipulated or not, and to localise the region of the image where the manipulation occurred.” He added that while they are currently working on still images, this discovery could also help them detect deepfake videos. “If you can understand the characteristics in a still image, in a video it’s basically just putting still images together one after another,” Roy-Chowdhury said. “The more fundamental challenge is probably figuring out whether a frame in a video is manipulated or not,” he added. The researchers have discovered that, in this case, even a single manipulated frame would raise a red flag. But Roy-Chowdhury thinks they still have a long way to go before automated tools can detect deepfake videos in the wild. “It’s a challenging problem… This is kind of a cat and mouse game. This whole area of cybersecurity is in some ways trying to find better defence mechanisms, but then the attacker also finds better mechanisms,” he said.
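The architecture described above is specialised (resampling features and LSTM cells feeding an encoder-decoder), but the core idea of predicting a per-pixel manipulation mask can be sketched with a plain convolutional encoder-decoder. The Keras sketch below is a simplified stand-in for illustration only, not the researchers' model; the input size and layer widths are arbitrary assumptions.

```python
# Simplified convolutional encoder-decoder for pixel-level manipulation masks (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_localiser(input_shape=(256, 256, 3)):
    inp = layers.Input(shape=input_shape)
    # Encoder: progressively downsample the image into a compact representation
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: upsample back to the original resolution
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    # One sigmoid channel: probability that each pixel was manipulated
    mask = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inp, mask)

model = build_localiser()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```

Trained on pairs of images and ground-truth manipulation masks, such a network outputs exactly the kind of per-pixel probability map the article describes.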
|
With advanced image journaling tools, one can now easily alter the semantic meaning of images by using manipulation techniques like copy clone, object splicing/removal, which can mislead the viewers. One of the gravest and notorious examples of this sort of tampering is deepfakes. At a time when these videos are threatening the privacy of users, […]
|
["AI News"]
|
["deep fake", "DeepFake", "deepfakes"]
|
Prajakta Hebbar
|
2019-07-23T13:28:07
|
2019
| 334
|
["Go", "artificial intelligence", "programming_languages:R", "AI", "neural network", "deepfakes", "programming_languages:Go", "deep fake", "LSTM", "DeepFake", "R"]
|
["AI", "artificial intelligence", "neural network", "R", "Go", "LSTM", "deepfakes", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/indian-origin-scientists-develop-new-ai-system-to-stop-deepfake-videos/
| 3
| 9
| 0
| false
| true
| false
|
10,095,923
|
Oracle Announces GenAI Capabilities in HR to Boost Productivity
|
Oracle has introduced generative AI-powered capabilities to its Oracle Fusion Cloud Human Capital Management (HCM) platform. These capabilities, based on the Oracle Cloud Infrastructure (OCI) generative AI service, aim to streamline HR processes and improve efficiency for candidates, employees, managers, and recruiters. Oracle’s generative AI capabilities in Oracle Cloud HCM leverage OCI’s AI services, ensuring high levels of security, performance, and business value. Built-in prompts guide users to achieve better results while minimising factual errors and biases, and customers retain control over the data used by the generative AI models, ensuring the safety of sensitive and proprietary information. The embedded generative AI capabilities in Oracle Cloud HCM offer functionalities such as Assisted Authoring, Suggestions, and Summarisation. Assisted Authoring enables employees, managers, and HR leaders to create content easily, saving time and improving productivity. Suggestions provide guidance to users based on natural language processing and best practices, improving task completion speed and accuracy. Summarisation helps increase efficiency by extracting key insights from multiple data sources. Oracle Cloud HCM generative AI services are powered by OCI, offering high-performance AI innovation while maintaining end-to-end security and customer data ownership. Oracle Cloud HCM, designed for the cloud, connects all HR processes and provides a single source of truth for HR teams, enabling informed people strategies and improving business operations. The integration of new use cases aims to enable organisations to embrace continuous innovation, improve HR processes, and increase productivity. Experts believe that the aim of automation in HR is to enhance service experiences and optimise resources rather than eliminate jobs. AI-powered service desks can handle repetitive tasks, freeing up HR professionals to focus on complex issues that require empathy and critical thinking, and allowing them to dedicate more time to strategic initiatives like talent development and workforce planning. In addition to Oracle’s initiatives, other companies like Rezolve AI have implemented large language models like ChatGPT to automate employee support, and Zoho Corporation backs a similar automation-first approach to HR service delivery. The integration of AI tools and technologies in HR workflows presents opportunities for professionals to upskill and embrace innovative roles in the field, transforming the HR landscape.
|
Oracle has introduced generative AI-powered capabilities to its Oracle Fusion Cloud Human Capital Management platform. These capabilities, based on Oracle’s Cloud Infrastructure (OCI) generative AI service, aim to streamline HR processes and improve efficiency for candidates, employees, managers, and recruiters. Oracle’s generative AI capabilities in Oracle Cloud HCM leverage OCI’s AI services, ensuring high levels […]
|
["AI News"]
|
["Oracle", "zoho"]
|
Shyam Nandan Upadhyay
|
2023-06-29T10:45:10
|
2023
| 371
|
["zoho", "ChatGPT", "API", "AI", "ML", "Oracle", "RAG", "GPT", "Aim", "generative AI", "GAN", "R"]
|
["AI", "ML", "generative AI", "ChatGPT", "Aim", "RAG", "R", "API", "GPT", "GAN"]
|
https://analyticsindiamag.com/ai-news-updates/oracle-announces-genai-capabilities-in-hr-to-boost-productivity/
| 2
| 10
| 1
| true
| true
| false
|
10,081,460
|
India Needs Recommendation Engines with Checks and Balances
|
Gonzalez v. Google and Twitter v. Taamneh were two lawsuits that came up for hearing in the Supreme Court of the United States last week. In essence, the lawsuits accused Twitter and Google of assisting Islamic State attacks. The judgment will prove to be a landmark in deciding whether web providers can be held responsible for hosting unlawful posts, particularly if they encourage them through algorithmic recommendations. However, social media companies in the US are protected from such lawsuits under Section 230, which shields internet service providers and social media platforms like YouTube, Facebook, and Twitter from liability for third-party (user) content, such as a hate speech video, that may be in violation of the law. As per Section 230, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.“ However, a lot of people feel that this rule needs to be revised, because the safeguards outlined in Section 230 of the 1996 Communications Decency Act (CDA) were created more than 25 years ago, in an era of ‘naive technological optimism’ and limited technological capabilities. For instance, Mark Zuckerberg once told the United States Congress that Facebook would benefit from clearer supervision from elected officials and that it could make sense for there to be accountability for some of the information. In a similar vein, US President Joe Biden told The New York Times during his poll campaign that Section 230 should be “revoked, immediately”. Google, however, believes that undercutting the provision would create more problems than it solves. As per the statement given by Google to The Verge, “Through the years, YouTube has invested in technology, teams, and policies to identify and remove extremist content. We regularly work with law enforcement, other platforms, and civil society to share intelligence and best practices.” According to Google, undercutting Section 230 “would make it harder to combat harmful content — making the internet less safe and helpful for all of us.” Where does India stand? With respect to the ongoing Section 230 tussle between social media giants and the US government, India stands well prepared: Section 79 of the Indian IT Act, 2000 is the equivalent of Section 230 in the USA. As per Section 79, any social media intermediary will not be subject to legal action for any third-party information, data, or communication link made available or hosted by it. In essence, it means that a social media platform won’t be held accountable for any legal action if it merely serves as a bridge to transmit messages from one person to another without any kind of interference. However, the Act also states that once a platform is made aware of any illegal activity occurring on it through a notification from a government or government agency, failing to promptly remove or disable access to the unlawful material, without tainting the evidence in any way, can be grounds for legal prosecution. Social media platforms will be required to remove any “misinformation” or illegal content, as well as content that encourages animosity between various groups on the basis of religion or caste with the intention of inciting violence, within 72 hours of being flagged, according to IT minister Rajeev Chandrashekher. Should recommendation engines be regulated? None of these regulations, however, deal with the problem of recommendation systems.
In the recent Gonzalez v. Google lawsuit, the plaintiffs assert that Google knowingly promoted Islamic State propaganda that allegedly inspired a 2015 attack in Paris, thereby providing material assistance to a terrorist organisation. Given recent events, it is possible that the court may order social media platforms like YouTube to regulate their recommendation systems. But can a regulatory body impose culpability on the recommendation systems used by social media platforms? China, for example, has introduced regulations on recommendation engines under which “algorithm operators will have to update their technology to comply with new technical requirements, from auditing keywords to enabling users access to and control over their personal data profiles.” Additionally, operators are forbidden from employing algorithms for a variety of illegal actions, such as enacting anti-competitive acts and engaging in pricing discrimination, and will have to modify the direction of their recommendation algorithms in order to adhere to “mainstream norms.” However, the tech ecosystem in China is very distinct from that in the rest of the world, and what is feasible there cannot simply be replicated in nations like India or the US. Governments and tech corporations would need to collaborate to come up with a plan that would allow platforms to identify hazardous content before recommendation algorithms do.
|
Governments and tech corporations would need to collaborate to come up with a plan that would allow platforms to identify hazardous content before recommendation algorithms do
|
["IT Services"]
|
["China", "Facebook", "Google", "recommendation engines", "tiktok", "USA", "YouTube"]
|
Lokesh Choudhary
|
2022-12-05T15:00:00
|
2022
| 778
|
["tiktok", "Go", "AWS", "recommendation engines", "AI", "cloud_platforms:AWS", "programming_languages:R", "recommendation systems", "RAG", "GAN", "USA", "ViT", "Google", "Facebook", "YouTube", "R", "China"]
|
["AI", "RAG", "recommendation systems", "AWS", "R", "Go", "GAN", "ViT", "cloud_platforms:AWS", "programming_languages:R"]
|
https://analyticsindiamag.com/it-services/india-needs-recommendation-engines-with-checks-and-balances/
| 4
| 10
| 0
| false
| true
| false
|
58,550
|
Praxis Business School – Creating Cyber Warriors through their Post Graduate Program in Cyber Security
|
The unique Praxis Cyber Security Program creates industry-ready ‘Cyber Warriors’ – designed by a hardened professional who understands the industry pain-points and knows what it takes to make a successful career in this area. Analytics India Magazine caught up with Tathagata Datta, who is leading the Cyber Security program at Praxis. Tathagata holds an experience of 20 years in the Cyber Security domain with expertise in information security audit, cyber risk assessment, cyber incident handling and digital forensics. He was instrumental in establishing India’s first commercial “Cyber Range” – this is being used by national security agencies to enhance their cybersecurity competency. As a “Cyber Risk Analyst” he assists National Critical Infrastructures to identify their cyber threats and mitigate the risks appropriately. Tathagata has extensive experience in conducting information security audits in India, Europe, Singapore, and the UAE. He is an empanelled information security Lead Auditor at BSI (National Compliance Body of Govt. Of UK) and a member of NASSCOM and Data Security Council of India. He is an Executive Member of IEEE Comsoc. Prior to joining Praxis, Tathagata was the Vice President, Chief Information Security Officer and Head of IT Governance at one of the largest NBFCs of India. Q: Tell us a bit about the Postgraduate Program in Cybersecurity A: Praxis Business School is committed to playing a significant role in creating a strong pool of resources who understand the interplay among data, technology and business and can contribute significantly to the exciting Digital Future. This is the only program in the country that is designed to create industry-ready ‘Cyber Warriors’ by addressing the three aspects of the Cyber Security ecosystem – people, processes and technology – with special emphasis on governance and compliance. The other feature that needs to be highlighted is that we have some of the top Global enterprises supporting us in this initiative – from designing courses, delivering lectures and helping us build a state-of-the-art lab with their tools and technology. Students will get hands-on experience that is unmatched in the education ecosystem. Q: What is the scope for a career in Cybersecurity? A: There is a serious shortage of adequately skilled professionals in the Cyber Security domain – unlike other domains, Cyber Security is not an option, it’s a mandate from governing institutions and firms have no choice but to recruit and build cybersecurity teams. According to PWC reports, the cybersecurity market in India is expected to grow from USD 1.97 billion in 2019 to USD 3.05 billion by 2022, at a compound annual growth rate of 15.6%, 1.5 times the global rate but there is a huge dearth of trained professionals to cater to this market. The popular job portal Indeed reported a spike of 150 per cent in cybersecurity roles between January 2017 and March 2018. As per reports of IBM, India needs a pool of 3 million cybersecurity professionals right now and according to Cybersecurity Ventures, there will be 3.5 million unfilled cybersecurity jobs globally by 2021. Thus not only is there a massive scope for cybersecurity personnel in India, I think as an educational institute that leads in design and delivery of courses for the digital world, but it is also our responsibility to help India address the gap in resources – as this has implications for the country’s security. Q: How is a Praxis qualification rated in today’s business environment? 
A: Praxis is a pioneer when it comes to teaching next-generation, tech-skill-oriented programs. We started the formal teaching of data science in the country in 2011. Praxis programs have been well received by the industry, and the Data Science program has been consistently ranked as one of the top 3 programs in India by prominent publications. The Post Graduate Program in Cyber Security has been ranked 3rd amongst the top 10 cybersecurity programs in the country by Analytics India Magazine. Q: How is the Postgraduate program in Cybersecurity at Praxis structured? A: While there are several short-term certification courses addressing specific job roles, the 9-month full-time program serves the need for a comprehensive course that addresses all the aspects of a cybersecurity ecosystem with a blend of classroom and lab experiences. This includes 525 hours of lectures, lab work, case studies and projects – distributed over 3 trimesters of 175 hours each, with hands-on experience in a state-of-the-art cybersecurity lab. If you are serious about a successful career in cybersecurity, this is the course for you. The curriculum covers: ● Introduction to Cyber Security ● Network, OS and Database Security ● Security Architecture & VA & PT Mechanism ● Emerging Technologies and Related Security ● Wireless Security ● Machine Learning and Deep Learning ● Malware Analysis and Basics of Digital Forensics ● Incident Management, BCP and Process Approach Q: What specialisations are being offered to make the new PGP at Praxis stand out? A: The new PGP in cybersecurity shall contain specializations in SOC Analysis, Digital Forensics, Security Incident Handling and Information & Cyber Audit. As we speak, India Inc. is struggling big time to find people with in-depth knowledge in these domains. I was a CISO myself a few months back – so I know how difficult it was to find people. I have designed the specializations keeping real industry pain-points in mind. The four specializations are: ● SOC Analysis ● Digital Forensics ● Security Incident Handling ● Information and Cyber Audit Q. You spoke about industry partnerships. I am sure readers would like to know in some detail. A: As a discipline, Cyber Security cannot really be confined to the realms of classroom theoretical teaching – there is absolutely no substitute for hands-on, lab-driven learning. Praxis has forged extensive industry partnerships with Cisco, Fortinet, ISACA (Kolkata Chapter), the British Standards Institute (BSI) and the Infosec Foundation to make the program relevant and effective. Praxis believes in imparting industry-relevant knowledge by working closely with industry partners for successful program delivery and the desired impact. Praxis and BSI have agreed to jointly run specialized courses on cyber security. Praxis is currently the official delivery partner for BSI in India. “There is a huge skill set gap presently existing in India and the demand for skilled professionals would go up in the years to come. Praxis Business School has introduced this course on Cyber Security, which is industry-relevant and covers all the tools, processes and systems related to cybersecurity. With the course being delivered by industry professionals, the students would gain immense value from insights on the best practices of cybersecurity used in the present industry.” – Mr. Nirupam Sen, Business Head-East Region, BSI Praxis is the authorized training partner for the Cisco Networking Academy and will train and certify students in advanced network & security-related courses. 
Fortinet has partnered with Praxis, and students get the opportunity to access Fortinet’s technology in the Praxis Lab. ISACA (Kolkata Chapter) is supporting Praxis Business School’s cybersecurity program as a knowledge partner. Praxis’ cybersecurity programs are supported by the Infosec Foundation in the capacity of Industry Interface Partner for campus connect activity, placement support, incubation and mentoring, cross-collaboration and developing cybersecurity products. Q: Who will be in charge of delivering the new Postgraduate program in Cybersecurity at Praxis? A: The program will be delivered by well-recognized industry leaders and senior practitioners who have rich experience in this space, in addition to the renowned Praxis faculty team. Some of these educators and industry professionals are: ● Dr. Prithwis Mukerjee, Director, Praxis Business School | B.Tech (IIT Kharagpur), M.S., Ph.D. (The University of Texas at Dallas) ● Dr. Subhasis Dasgupta – Faculty – Machine Learning, B.Tech (NIT Surat), MBA (IBS), Research Scholar (IIM Ahmedabad), PhD (RK University) ● Charanpreet Singh, Co-founder & Director, Praxis Business School Foundation | B.Tech (IIT Kanpur), MBA (University of Iowa), Chevening Scholar (British Government) ● Tathagata Datta – Director of Cyber Security, Praxis Business School | B.Sc (CU), MCA (WBUT), Executive Member IEEE ComSoc, Former CISO with India’s largest NBFC, Ex-Director National Cyber Range ● A.B. Sengupta – Designated CISO for ERLDC / POSOCO, a Govt. of India Enterprise ● Deb Kumar Roy – Enterprise Security Architect at a global consulting firm ● Joydeep Bhattacharya – COO at TCG Digital ● Koushik Nath – Security Architect, Cisco Systems Q: What are placement opportunities going to be like at Praxis? A: To start with, the sheer demand for Cyber Security professionals will ensure massive placement opportunities for candidates who have taken the wise decision to invest 9 months in this domain. To add to this, Praxis Business School has had a successful track record of placing candidates in top-notch organizations through its placement program. Till now, Praxis has placed 29 batches of students with a consistent 95%+ placement record. The Praxis Placement Program will manage the participant’s transition to a promising career in the exciting Cyber Security world. The program is a structured process committed to creating quality placement opportunities for all enrolled students of full-time programs in PGDM, Data Sciences and Cyber Security. You can read about the last two Day Zero placements at Praxis here: June 2019, Nov 2019 Q: In summary, give me three things that make the Praxis Program the best in this field. A: I would say: (1) The course design and delivery, with its emphasis on all three aspects that define this domain – people, process and technology – along with the 4 specializations and the high-quality faculty team we have in place. (2) The lab, created with top-of-the-line equipment and significant industry support – we will have a full-stack lab with red-team capability, which will add immense value to the learning outcomes. (3) The Praxis Placement Program, with its impeccable record of finding excellent opportunities for its students across all programs. Q: What is your advice to youngsters unsure of making a decision about their careers in these uncertain and rapidly changing times? A: I have been a Cyber Security professional for as long as I can remember. 
There are a few things that make this domain incredibly exciting: The challenge – as a Cyber Security professional, you are always under the threat of a malfunction, a hack, an attack – and the high of meeting and overcoming these challenges to secure your company’s or country’s data and keep the business going is something you will not find in any other profession. The learning – by its very nature, the Cyber Security field offers a life-long journey of learning. Technologies and the nature of threats will evolve – and you have to evolve in tandem to ensure that you don’t get left behind. The rewards – India and the world are struggling to find well-trained Cyber Security all-rounders who go far beyond hacking and are able to architect an ecosystem capable of handling cyber threats. If you are one such resource, your growth, in terms of both money and respect, is going to be phenomenal. Thus, my advice is – if you like solving problems, if you understand that cybersecurity is a hyper-essential part of this whole digital world, and if you are passionate about making a contribution, you are welcome to Praxis and to this wonderful world of Cyber Security. You don’t need to be a mathematician or a tech geek – you need to be a structured thinker, a problem solver and a team player to succeed. “Praxis has this capability to segregate an individual’s needs, interests and business model and convert them into effective training suggestions. As a result, professionals get very involved, gain fruitful information and enlighten themselves through this training course. I definitely recommend Praxis.” – Mr. A.B. Sengupta, CISO of ERLDC / POSOCO (A Govt. of India Enterprise)
|
The unique Praxis Cyber Security Program creates industry-ready ‘Cyber Warriors’ – designed by a hardened professional who understands the industry pain-points and knows what it takes to make a successful career in this area. Analytics India Magazine caught up with Tathagata Datta, who is leading the Cyber Security program at Praxis. Tathagata holds an experience […]
|
["AI Trends"]
|
["Cyber Security", "cybersecurity career", "Cybersecurity India", "cybersecurity professionals", "mba in business analytics", "pgp program in data science", "Praxis Business School"]
|
Ambika Choudhury
|
2020-03-13T12:00:00
|
2020
| 1,926
|
["data science", "cybersecurity professionals", "Go", "Cyber Security", "machine learning", "API", "AI", "cybersecurity career", "Git", "mba in business analytics", "RAG", "pgp program in data science", "Cybersecurity India", "deep learning", "analytics", "R", "Praxis Business School"]
|
["AI", "machine learning", "deep learning", "data science", "analytics", "RAG", "R", "Go", "Git", "API"]
|
https://analyticsindiamag.com/ai-trends/praxis-business-school-creating-cyber-warriors-through-their-post-graduate-program-in-cyber-security/
| 4
| 10
| 5
| true
| false
| true
|
54,577
|
Exciting AI Researches To Look Up To In 2020
|
The last decade has been exciting for artificial intelligence. It has gone from esoteric to mainstream very quickly, thanks to ingenious work by researchers, the democratisation of technologies by top companies and, of course, enhanced hardware. In this article, we will list down a few potential research areas that we want to, or at least hope to, see more of in the year 2020. Explainable AI A publication-count chart from Alejandro Barredo Arrieta et al. shows that the number of publications with XAI as a keyword has risen over the past five years. Explainable AI refers to methods and techniques in the application of artificial intelligence such that the results of the solution can be understood by human experts. The need for transparency can be seen in the increased interest of researchers. Loss Change Allocation by Uber Uber’s researchers call this loss change allocation (LCA). LCA allocates changes in loss over individual parameters, thereby measuring how much each parameter learns. Questioning the AI by IBM By interviewing 20 UX and design practitioners working on various AI products, researchers tried to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products. Causality Causality is the degree to which one can rule out plausible alternative explanations. By defining causality in systems, one gets to ask, or even answer, why one needs or doesn’t need a certain feature in a model. Researchers like Judea Pearl insist that machine learning has experienced unexpected success without paying attention to fundamental theoretical impediments. Here are a few interesting research efforts addressing causal inference in machine learning: Causality for Machine Learning by Bernhard Schölkopf explains how the field is beginning to engage with causal ideas. DeepMind’s Causal Bayesian Networks demonstrated the use of causal Bayesian networks (CBNs) to visualise and quantify the degree of unfairness. Adversarial Learning of Causal Graphs aims at recovering full causal models from continuous observational data in a multivariate non-parametric setting. Meta-Learning The concept of meta-learning dates back at least three decades. It was popularised by AI pioneer Juergen Schmidhuber and was also a part of his diploma thesis in 1987. Today, it is one of the most talked-about concepts in the machine learning community. And 2020 looks like a promising year for more research in this domain. Meta-learning, in short, as Schmidhuber defines it, is learning the credit-assignment method itself through self-modifying code. The applications are not limited to semi-supervised tasks; meta-learning can also be taken advantage of in tasks such as item recommendation, density estimation, and reinforcement learning. A recent work on visual concept meta-learning by MIT was one such example, where the researchers were able to successfully categorise objects with multiple combinations of visual attributes using limited training data. They also made the model predict relations between unseen pairs of concepts. This work presented a systematic evaluation on both synthetic and real-world images, with a focus on learning efficiency and strong generalisation. Meta-learning is one of the hottest spaces to watch in 2020, as researchers race to bring machine intelligence to everyday tasks. Federated Learning Decentralising the learning process makes machine learning algorithms more robust and quicker. 
Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. For instance, Google uses a similar approach for digital zooming on its flagship phones: it deploys state-of-the-art algorithms on-device to make sense of the picture, and federated learning pushes the boundary further by sharing what is learned across devices with sophisticated anonymity. A few exciting recent works on federated learning: Efficient Federated Learning on Edge Devices A challenge in federated learning is that the devices usually have much lower computational power and communication bandwidth than server machines in data centres. To overcome this challenge, this paper proposes a method that integrates model pruning with federated learning. Exploiting Unlabeled Data in Smart Cities using Federated Learning This work introduces a semi-supervised federated learning method called FedSem that exploits unlabeled data. Reinforcement Learning Reinforcement learning occupies a vast section of AI. The most talked-about news from the AI community has often been around research on agents, reward systems and how to make machines teach themselves. Here are a few interesting works that indicate there is more to come in the near future: Automating Reward Design In an attempt to automate reward design, the robotics department at Google introduced AutoRL, which automates RL reward design by using evolutionary optimisation over a given objective. Reward Tampering Reward tampering is any type of agent behaviour that changes the reward process itself rather than pursuing the intended objective. In an attempt to acknowledge the consequences of reward tampering and provide a solution to the same, researchers at DeepMind released a report discussing various aspects of reinforcement learning algorithms. Reinforcement Learning Without Rewards This work shows that agents could use counterfactuals to develop a form of ‘empathy’ among agents. Reinforcement Learning for Recommender Systems In an attempt to make better decisions and recommendations, ML developers from Google merged reinforcement learning and recommender systems in this work. Compression Techniques The larger the neural network, the higher the computational cost for real-time applications such as online learning and incremental learning. Here are a few exciting works related to compression approaches: Deep Neural Network Compression with Single and Multiple Level Quantisation In this paper, the authors propose two novel network quantisation approaches: single-level network quantisation (SLQ) for high-bit quantisation and multi-level network quantisation (MLQ). Efficient Neural Network Compression In this paper, the authors proposed an efficient method for obtaining the rank configuration of the whole network. AI Ethics, Regulations And Privacy Preservation The year 2019 witnessed the dark side of algorithms, when GANs and OpenAI’s GPT-2 were used to generate fake images and fake text respectively. These two applications had people wondering about the ethics of AI implementation and opened up research into how organisations can fight the ill effects of AI. Facebook has also launched a million-dollar deepfake detection challenge on Kaggle to thwart deepfakes. Beyond deepfakes, algorithmic exploitation can also happen on e-commerce sites, where privacy could be at stake. 
The race to enhance algorithms by acquiring vast amounts of data can compromise the privacy of individuals. The European Union’s General Data Protection Regulation (GDPR), which went into effect in 2018, insists on high-level data protection for consumers and harmonises data security regulations within the European Union. So, this year, there is a high chance of organisations moving from voicing opinions to actually building tools that ensure privacy without affecting the efficiency of the algorithms. The domains listed above cover the most talked-about topics at flagship conferences and forums. However, in AI, there is always a transfer of techniques across domains. For instance, there is great potential at the convergence of causality and reinforcement learning. AutoML is another exciting avenue, which already picked up pace last year. Many tools have also been released to ease the deployment of machine learning models. The year 2020 could set new benchmarks for AI approaches that are smarter, safer and more trustworthy.
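To make one of the themes above concrete, the sketch below shows federated averaging (FedAvg), the basic recipe behind much of the federated learning work cited here, in plain NumPy. It is an illustrative toy under simplified assumptions – synthetic client data and a linear model – not code from any of the papers mentioned.

```python
import numpy as np

# Minimal federated averaging (FedAvg) sketch: each simulated client fits a
# linear model on its own data, and only the model weights leave the "device".

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Average locally updated weights, weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                      # three simulated devices
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                     # 20 communication rounds
    w = federated_round(w, clients)
print("estimated weights:", w)          # approaches [2, -1] without pooling raw data
```

The only thing that leaves each simulated device is the locally updated weight vector; production frameworks layer secure aggregation, compression and differential privacy on top of this basic loop.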
|
The last decade has been exciting for artificial intelligence. It has gone from esoteric to mainstream very quickly, thanks to ingenious work by researchers, the democratisation of technologies by top companies and, of course, enhanced hardware. In this article, we will list down a few potential research areas that we want to, or at least […]
|
["AI Trends"]
|
["AI & ethics", "ai publications by india", "federated learning"]
|
Ram Sagar
|
2020-01-24T12:52:32
|
2020
| 1,221
|
["federated learning", "Go", "artificial intelligence", "machine learning", "AI", "AI & ethics", "neural network", "ML", "R", "Aim", "xAI", "ai publications by india"]
|
["AI", "artificial intelligence", "machine learning", "ML", "neural network", "xAI", "Aim", "federated learning", "R", "Go"]
|
https://analyticsindiamag.com/ai-trends/ai-2020-meta-learning-auto-ml-federated-learning/
| 3
| 10
| 0
| false
| true
| true
|
10,101,607
|
PyTorch Edge Introduces ExecuTorch Enabling On-Device Inference
|
PyTorch Edge recently introduced ExecuTorch, a solution enabling on-device inference capabilities across mobile and edge devices. With strategic backing from industry giants like Arm, Apple, and Qualcomm Innovation Center, PyTorch Edge is set to redefine the future of on-device AI deployment. ExecuTorch addresses the longstanding challenge of fragmentation within the on-device AI ecosystem. It offers a well-crafted design that seamlessly integrates third-party solutions, allowing for accelerated machine learning model execution on specialized hardware. PyTorch Edge’s partners have contributed custom delegate implementations, optimizing model inference execution on their respective hardware platforms. Key components of ExecuTorch include a compact runtime with a lightweight operator registry, covering a diverse range of PyTorch models. This streamlined approach facilitates the execution of PyTorch programs on various edge devices, from mobile phones to embedded hardware. ExecuTorch also ships with a Software Developer Kit (SDK) and toolchain, providing ML developers with an intuitive user experience for model authoring, training, and device delegation, all within a single PyTorch workflow. This suite of tools empowers developers with on-device model profiling and enhanced debugging capabilities. One of ExecuTorch’s distinguishing features is its portability. It is compatible with a wide array of computing platforms, from high-end mobile phones to constrained embedded systems and microcontrollers. Moreover, it enhances developer productivity by streamlining the entire process, from model authoring and conversion to debugging and deployment. With PyTorch Edge, ML engineers can seamlessly deploy a variety of ML models, including those for vision, speech, NLP, translation, ranking, integrity, and content creation tasks, to edge devices. This aligns perfectly with the increasing demand for on-device solutions in domains such as Augmented Reality, Virtual Reality, Mobile, IoT, and more. PyTorch Edge’s framework ensures portability of core components, catering to devices with diverse hardware configurations. Its custom optimizations for specific use-cases coupled with well-defined entry points and tools create a vibrant ecosystem, making it the future of the on-device AI stack. With the launch of ExecuTorch, PyTorch Edge is poised to transform the landscape of on-device AI deployment. The community eagerly anticipates the innovative applications that will emerge from ExecuTorch’s on-device inference capabilities across mobile and edge devices, bolstered by the support of its industry partner delegates.
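For readers who want a feel for the single-workflow authoring-to-deployment story described above, here is a minimal sketch of exporting a PyTorch model to an ExecuTorch .pte program. It follows the launch-era API as I understand it (torch.export plus executorch.exir.to_edge); exact module paths and method names may differ between ExecuTorch releases, so treat it as an assumption-laden illustration rather than official sample code.

```python
import torch
from torch.export import export

# Assumed import path from the ExecuTorch launch-era docs; may differ by release.
from executorch.exir import to_edge

class TinyModel(torch.nn.Module):
    """A toy model standing in for a real vision/speech/NLP network."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_inputs = (torch.randn(1, 16),)

# 1. Capture the model as an exported (ATen dialect) program.
exported_program = export(model, example_inputs)

# 2. Lower to the Edge dialect, then to an ExecuTorch program.
edge_program = to_edge(exported_program)
et_program = edge_program.to_executorch()

# 3. Serialize the flatbuffer that the on-device runtime loads.
with open("tiny_model.pte", "wb") as f:
    f.write(et_program.buffer)
```

On the device side, the compact C++ runtime – or a partner’s hardware delegate – loads the .pte file and runs inference; delegating parts of the graph to a specific backend would typically happen between the edge-dialect and ExecuTorch-program steps.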
|
It’s backed by industry giants like Arm, Apple, and Qualcomm Innovation Center
|
["AI News"]
|
[]
|
Siddharth Jindal
|
2023-10-18T11:13:34
|
2023
| 360
|
["machine learning", "AI", "PyTorch", "innovation", "ML", "RAG", "NLP", "Ray", "ViT", "R"]
|
["AI", "machine learning", "ML", "NLP", "Ray", "PyTorch", "RAG", "R", "ViT", "innovation"]
|
https://analyticsindiamag.com/ai-news-updates/pytorch-edge-introduces-executorch-enabling-on-device-inference/
| 3
| 10
| 1
| false
| false
| false
|
10,142,622
|
Human Clone Has Arrived!
|
Clone, a humanoid robotics startup from Poland, announced the early release of its latest creation, a highly advanced humanoid robot designed for service and hospitality roles. The company will manufacture 279 units of the limited edition Clone Alpha, as noted in The Humanoid Hub’s post on X dated December 5, 2024. These home-use androids will come with a list of pre-installed skills and be equipped with the ‘Telekinesis’ training platform, which allows the user to teach the robot new skills. The company announced on its official website that pre-orders will open in 2025, saying, “Reserve one of the first 279 Clones ever made, in its Alpha Edition.” Earlier, AIM also discussed the stream of humanoids being developed by every major tech company. While others may be looking at large-scale industrial uses, for instance in warehouses, FigureAI’s robots seem aimed at being companions. Clone’s previous developments include Torso, a bimanual android actuated with artificial muscles, and Clone Hand. While this is an achievement for the field of robotics, some are also expressing doubts about the product’s delivery. Organ System and Cybernetic Intelligence The Clone integrates advanced artificial muscle, skeletal, nervous, and vascular systems for unparalleled human-like functionality. Its ‘Myofiber’ technology, developed in 2021, actuates natural animal skeletons by attaching musculotendon units to bones. These monolithic units prevent tendon failures and deliver a muscle response in under 50 ms, with over 30% contraction and 1 kg of force per 3-gram muscle fibre. Myofiber is the only artificial muscle capable of this combination of speed, power density, and efficiency. The skeletal system replicates the human body with 206 bones, featuring fully articulated joints, artificial ligaments, and connective tissues. The design includes 1:1 ligament and tendon placement for a highly flexible structure. The shoulder has 20 degrees of freedom (DoF), the spine adds 6 per vertebra, and the hand, wrist, and elbow provide 26 degrees, totalling 164 DoF in the upper torso alone. In comparison, Tesla’s Optimus robot displayed 22 DoF in its hand movement capabilities. Clone is also made from durable polymers, making it both cheap and robust. The nervous system enables instantaneous muscle control via proprioceptive and visual feedback. Equipped with four depth cameras, 70 inertial sensors, and 320 pressure sensors, it ensures precise movement and force feedback. Microcontrollers relay data to the NVIDIA Jetson Thor GPU running Cybernet, enabling advanced visuomotor coordination. The vascular system features a compact 500-watt electric pump supplying hydraulic pressure to the muscles at 100 psi. The ‘Aquajet valve’ technology uses minimal power (under 1 watt) to deliver efficient water pressure to actuate the muscles. Lastly, Clone Alpha integrates natural language interfaces, allowing communication in plain English, marking a new era in human-computer interaction through advanced AI models.
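As a quick back-of-the-envelope check on the muscle-fibre figures quoted above (1 kg of force from a 3-gram fibre), the implied force-to-weight ratio works out in a couple of lines; the numbers below simply restate the company’s claims and assume they refer to a single fibre.

```python
# Rough force-to-weight ratio implied by the quoted Myofiber figures.
fibre_mass_kg = 0.003          # 3-gram musculotendon unit
force_kgf = 1.0                # quoted pull force in kilograms-force

ratio = force_kgf / fibre_mass_kg
print(f"force-to-weight ratio ≈ {ratio:.0f}:1")   # ≈ 333:1

# For comparison in SI units: 1 kgf ≈ 9.81 N from a 3 g actuator.
force_newton = force_kgf * 9.81
print(f"≈ {force_newton:.1f} N from {fibre_mass_kg * 1000:.0f} g of actuator")
```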
|
Clone startup releases its limited edition Clone Alpha, equipped with ‘Telekinesis’.
|
["AI News"]
|
["humanoid"]
|
Sanjana Gupta
|
2024-12-06T16:37:27
|
2024
| 476
|
["Replicate", "startup", "programming_languages:R", "AI", "Aim", "ai_applications:robotics", "GAN", "R", "humanoid"]
|
["AI", "Aim", "R", "GAN", "startup", "Replicate", "programming_languages:R", "ai_applications:robotics"]
|
https://analyticsindiamag.com/ai-news-updates/human-clone-has-arrived/
| 3
| 8
| 1
| true
| false
| false
|
68,409
|
Google Adds New Privacy Testing Module In TensorFlow
|
Google introduced a new privacy testing library in TensorFlow to empower developers to analyse the privacy properties of classification models. This will become a part of TensorFlow Privacy, which was introduced in 2019 to enable privacy within AI models. Today, awareness about privacy is higher than ever, and it keeps growing as companies come under the scanner of experts over how they collect and process users’ data. Such circumstances have forced governments across the world to devise privacy protection laws such as GDPR, PDP and CCPA. Consequently, organisations have become critical of their AI models’ outcomes. One of the biggest challenges for companies while maintaining privacy is to avoid the leakage of information from AI models. In an attempt to mitigate such hurdles, Google introduced differential privacy, which adds noise to hide individual examples in the training dataset. However, according to Google’s researchers, it was designed for academic worst-case scenarios and can significantly affect model accuracy. Meanwhile, researchers from Cornell University started experimenting with various approaches to ensure privacy in ML models and came up with membership inference attacks. Membership Inference Attack With TensorFlow According to Google’s researchers, a membership inference attack is a cost-effective methodology that predicts whether a specific piece of data was used during training. The membership inference attack technique has seen a wide range of applications in recent years, especially in the privacy domain. In April 2020, membership inference attacks inspired work by the University of Edinburgh and the Alan Turing Institute on identifying whether a model can forget data to ensure privacy. After using membership inference tests internally, researchers from Google have now released support for the technique as a library with TensorFlow. One of the most significant advantages of the membership inference attack is its simplicity: it does not require any re-training, thereby avoiding disruption to developers’ workflows. The researchers performed a membership inference attack test on models trained on CIFAR10 (Canadian Institute For Advanced Research) — an object classification dataset. The dataset contains 60,000 32×32 colour images in 10 different classes representing aeroplanes, cars, birds and trucks, among others. “The test produced the vulnerability score that determines whether the model leaks information from the training set. We found that this vulnerability score often decreases with heuristics such as early stopping or using DP-SGD for training,” researchers from Google wrote on the TensorFlow blog. How Will It Help Determining whether a given data record was present in the training set will allow developers to check whether their models preserve privacy before deploying them in production. The researchers believe that with the membership inference attack feature in TensorFlow, data scientists will explore better architecture choices for their models and use regularisation techniques such as early stopping, dropout, weight decay, and input augmentation. In addition, the researchers also hope that the membership inference attack will become the starting point for the community to strive towards new architectures that close such leaks and, in turn, preserve privacy. 
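To give a flavour of how such a test is wired up, here is a minimal sketch using the TensorFlow Privacy membership inference module. The import paths follow the layout used around the time of the announcement and have moved in later releases, and the precomputed loss/label arrays are hypothetical, so treat the module names and call signatures as assumptions to verify against the installed version.

```python
import numpy as np

# Module layout as of the announcement-era TensorFlow Privacy; paths have
# shifted in later releases (e.g. under privacy_tests/), so verify locally.
from tensorflow_privacy.privacy.membership_inference_attack import membership_inference_attack as mia
from tensorflow_privacy.privacy.membership_inference_attack.data_structures import AttackInputData

# Assume per-example losses from your trained classifier on the training and
# held-out sets, plus the true class labels (hypothetical precomputed arrays).
loss_train = np.load("loss_train.npy")
loss_test = np.load("loss_test.npy")
labels_train = np.load("labels_train.npy")
labels_test = np.load("labels_test.npy")

attack_input = AttackInputData(
    loss_train=loss_train,
    loss_test=loss_test,
    labels_train=labels_train,
    labels_test=labels_test,
)

# Runs simple attackers (threshold, logistic regression, ...) and reports how
# well membership can be inferred; an AUC near 0.5 suggests the model leaks
# little about whether a record was in the training set.
results = mia.run_attacks(attack_input)
print(results.summary(by_slices=False))
```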
Currently, the membership inference attack is limited to classifiers; in future, the researchers will extend its capabilities to help developers use membership inference alongside other data science techniques. Outlook Privacy is gradually becoming central to machine learning models, as it has drawn concerns from around the world. Taking a different approach, Julia Computing, in late 2019, demonstrated training ML models on homomorphically encrypted data to preserve privacy. Besides, PyTorch introduced CrypTen for privacy-preserving machine learning built on secure multi-party computation. With the membership inference attack, however, TensorFlow has opened up new possibilities for developers to better examine their ML models and build trust among users.
|
Google introduced a new privacy testing library in TensorFlow to empower developers to analyse the privacy properties of classification models. This will become a part of TensorFlow Privacy, which was introduced in 2019 to enable privacy within AI models. Today, awareness about privacy among people is more than ever, and it is only growing as […]
|
["Global Tech"]
|
["Tensorflow"]
|
Rohit Yadav
|
2020-06-28T10:00:00
|
2020
| 610
|
["data science", "machine learning", "AWS", "AI", "PyTorch", "ML", "homomorphic encryption", "RAG", "differential privacy", "TensorFlow", "Tensorflow"]
|
["AI", "machine learning", "ML", "data science", "TensorFlow", "PyTorch", "differential privacy", "homomorphic encryption", "RAG", "AWS"]
|
https://analyticsindiamag.com/global-tech/google-adds-new-privacy-testing-module-in-tensorflow/
| 3
| 10
| 1
| false
| true
| true
|
10,120,014
|
Isomorphic Labs Has the Potential to Build Multi-$100 Bn Business
|
Google DeepMind’s co-founder and chief executive officer, Demis Hassabis, in a recent interview, said that its sister company, Isomorphic Labs, has the potential to build a business worth hundreds of billions of dollars. “I hope to achieve both (commercial success and societal benefits) with Isomorphic and build a multi-100 billion dollar business. I think it has that potential,” said Hassabis, without delving into a specific timeline. The Alphabet-backed medtech lab, along with DeepMind, released AlphaFold 3 yesterday. The protein structure prediction model is claimed to predict biomolecular interactions with at least 50% better accuracy than existing methods. “Well, if you ask me the number one thing AI can do for humanity, it will be to solve hundreds of terrible diseases. I can’t imagine a better use case for AI. So that’s partly the motivation behind Isomorphic and AlphaFold and all the work we do in sciences,” said Hassabis. He believes that “revolutionising the drug discovery process to make it ten times faster” and more efficient, and increasing the likelihood of passing clinical trials through better property prediction, offers plenty of commercial value. Future Vision Looking towards the end of the year, Hassabis said that Google DeepMind will combine the agent systems it developed for gaming with multimodal systems into large general models that plan and achieve goals. “So systems that are able not only to just answer questions for you, but actually plan and act in the world and solve goals and I think those are the things that will make these systems sort of the next level of usefulness in terms of being a useful everyday assistant,” he said. He further said that AI-designed drugs would probably be available in the ‘next couple of years’. Brains Behind Isomorphic Founded in 2021 by Hassabis, who also played a significant role in DeepMind’s inception, Isomorphic endeavours to use AI in drug discovery and research on severe human diseases. The brains behind Isomorphic include tech veteran Miles Congreve, serving as chief scientific officer, who contributed to the design of 20 clinical-stage drugs and co-invented Kisqali (Ribociclib), a marketed breast cancer treatment. Also noteworthy is chief technology officer Sergei Yakneen, who brings over two decades of expertise in engineering, machine learning, product development, and life sciences and medicine research. The company recently announced key partnerships with two of the world’s largest pharmaceutical companies — Eli Lilly & Co. and Novartis AG. The deals are said to have a combined value of close to $3 billion.
|
AI-designed drugs would probably be available in ‘Next Couple of Years’, said Google DeepMind chief Demis Hassabis.
|
["Deep Tech"]
|
[]
|
Shritama Saha
|
2024-05-09T22:41:32
|
2024
| 405
|
["Go", "machine learning", "programming_languages:R", "Modal", "AI", "programming_languages:Go", "R"]
|
["AI", "machine learning", "R", "Go", "Modal", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/deep-tech/isomorphic-labs-has-the-potential-to-build-multi-100-bn-business/
| 2
| 7
| 3
| false
| false
| false
|
10,082,950
|
10 Gifts For Your Tech Bros
|
It’s that time of the year again when we love to spend time with family and friends, buy gifts for them and show our love and care. Treat your beloved coder to one of these awesome gifts. It will definitely make them as happy as they are when a programme runs without a bug or error! Mouse Jiggler This mouse jiggler should be on your list if you want to give a special gift. It keeps the computer active by simulating cursor movement on the screen while you are away. What could be better than a stand-alone item requiring no software, no extra USB ports and no external power source? It is simple and easy to use: just place your mouse on top of the device. Follow the link to buy. Online courses Since many developers learn everything they know on the internet anyway, why not help them by gifting them an online course? Many of them are self-paced, and they can cover all of the basics in detail, which could set up your new programmer friend for success. Check out our recommendations for data engineers as well as AI enthusiasts! Books Like gadgets and music, techies also like books a lot, but you need clarity about which book to buy for them: a novel, a programming book, or something else. There are a lot of choices available, so here are the top AI books released in 2022 you should consider gifting your tech pal! Mechanical gaming keyboard Every programmer has felt the pain of long hours at the computer in their wrists, arms, or shoulders. An ergonomic keyboard gives comfort over long stretches of typing. Your search for the perfect keyboard ends here. Choose from the best options available in India to gift your coder friend this year! Follow the link to the list. Coffee mug There’s more to life than eating, sleeping, and coding – and that fourth necessity is coffee, which a programmer will chug every morning. Piping hot coffee paired with all-nighter coding is inevitable for techies. So, a geeky coffee mug is always the right choice for a developer. You could even throw in a floppy disk coaster for the finishing touch! Noise cancelling earphones Distractions are the enemy of productivity. Noise-cancelling headphones block distractions and allow coders to listen to focus-enhancing music. There are tons of noise-cancelling headphones on the market, with levels that let one decide whether to block all sound or allow ambient sounds. Which one is the best option? Find out here. VR headset Chances are that a VR headset has been on your techie friend’s list for a long, long time. So, grab the opportunity and fulfil their wish this festive season! Pick from the plenty of affordable and top-rated headsets available on the market today! Check out some of the best VR headsets here. Rubber duck A peculiar but well-known debugging method is to explain code line-by-line to a rubber duck. The idea caught on, if only tongue-in-cheek, so if your coding friend doesn’t already have a rubber duck, buy them one! Internal SSD An SSD is an excellent upgrade for computers and laptops, for both personal and geeky use. Programmers are always in search of such products: it is a tiny, capable device that gives coders fast, reliable storage for their projects. It is an ideal gift for coders who work from home, travel frequently to client locations, or work from unusual spots and remote countries. Choose from some of the best SSDs here. Gift bundle Give the gift of new tech accessories all bundled up! 
The bundle includes SURGE™ PD 33W GaN Tech Dual Port Charger, DailyObjects SURGE™ 4-in-1 Universal Braided Charging Cable, Marshal Mini Tech Kit Organiser and Scented Soy Pillar Candle. Check out the TechMate Gift Bundle here.
|
Are you looking for last-minute gift ideas?
|
["AI Trends"]
|
[]
|
Tasmia Ansari
|
2022-12-21T16:00:00
|
2022
| 650
|
["Go", "ELT", "programming_languages:R", "AI", "programming_languages:Go", "ViT", "GAN", "R"]
|
["AI", "R", "Go", "ELT", "GAN", "ViT", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-trends/10-gifts-for-your-tech-bros/
| 2
| 8
| 1
| false
| false
| false
|
56,584
|
Top 10 Free Ebooks To Learn Data Science
|
Data science is one collective term that is on everyone’s lips these days, with its applications now being used across big companies, research institutes, and college projects. Since data science is utilised in every sector these days, it is crucial to have a sound knowledge of this vast subject. Although a wide range of information can be found on any search engine, the wiser step is to read materials that have been carefully penned by experts from the field and are available in the form of e-books. In this article, we have composed a list of ten e-books for beginners that will provide adequate knowledge of data science and big data. Neural Networks and Deep Learning Via RidgeWood Authored by Michael Nielsen, the book is available for free and covers various programming paradigms. The book helps a reader build neural networks to recognise handwritten digits, among other use cases. Furthermore, it also lets a reader venture into the space of deep learning. Download the book here. Think Bayes Via Amazon When it comes to data science, Bayesian statistics is an important chapter which cannot be avoided at any cost. Penned by Allen B. Downey, this book makes Bayesian statistics simple to understand. The book uses Python code instead of mathematics to keep readers engaged. For this reason, it is advisable to have a decent knowledge of Python before turning through the pages. Download the book here. Statistical Learning with Sparsity: The Lasso and Generalizations Via Stanford.edu Since there has been a wide flow of data in every industry, ranging from medical to sports, this book allows a reader to work through a conceptual framework for ideas about data and data science. The book covers a wide variety of topics such as algorithms, multiclass logistic regression and generalized linear models. Download the book here. The Field Guide to Data Science Via Wolfpaulus ‘The Field Guide to Data Science’ has played a crucial role in government and commercial organisations by defining the ideal use of data science. From core concepts of data science to going deep into machine learning, the book has been put together so that organisations can understand how to use the available data as a resource. With over 15,000 downloads and available for free, the book contains a number of case studies to highlight its diverse role in multiple scenarios across several organisations. Download the book here. The White Book of Data Via Fujitsu When it comes to running a business with the help of big data, this book by Fujitsu sheds appropriate light on the trending topic. The book takes a reader through the definition of big data, the prevailing challenges in the big data space and the approaches in business. It provides guidelines for business analytics and helps in managing business operations. The book also educates a reader on clearing the hurdles in big data, its future and the final word on big data. Download the book here. Machine Learning Via Wiley Jason Bell has laid down his expertise on the pages of this book, which focuses on helping developers and technical professionals. The book covers a wide variety of topics such as the history of machine learning, its uses and the languages used in machine learning. It also carries a dedicated chapter on decision trees and artificial neural networks. Download the book here. Beginners Guide to Analytics Via Jigsaw Academy The book is a perfect choice for those who are entering the data analytics space. 
It provides an array of applications in analytics, ranging from sports to retail. The reader is introduced to different paid and free tools that are used in the analytics space. Furthermore, the book precisely showcases the future prospects of data analytics, which is vital information for a beginner to know. Through the book, a reader can learn a thing or two about the future of analytics and the careers related to it. Download the book here. Data Science: Theories, Models, Algorithms, and Analytics Via Librarything In 462 pages, this book provides a bucket full of information regarding data science. The book covers a wide variety of sections by giving access to theories, data science algorithms, tools and analytics. Some highlights of the book range from open-source modelling in R to Bayes’ theorem. Download the book here. Automate the Boring Stuff with Python Via Elektor Made for those who love doing practical things, this book teaches everything in a practical way to make learning easy and engaging. The book shows how to work with Excel (XLS) spreadsheets programmatically, without wading through heavy algorithms. Highlights of the book include automating trivial tasks using Python and scraping data from the web. Download the book here. An Introduction to Statistical Learning Via Amazon Four authors with years of expertise penned this book for upper-level graduate students to help them enrich their understanding of statistical learning. The book contains a number of R code labs which could be of great value for a young data scientist who has just entered this wide universe. Download the book here.
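In the spirit of Think Bayes’ code-first approach, here is a tiny, self-contained Bayes’ theorem update in Python; the disease-test numbers are invented purely for illustration and are not taken from the book.

```python
# Bayes' theorem: P(H | D) = P(D | H) * P(H) / P(D)
# Toy scenario (made-up numbers): a test for a condition that affects 1% of people.

prior = 0.01           # P(condition)
sensitivity = 0.95     # P(test positive | condition)
false_positive = 0.05  # P(test positive | no condition)

# Total probability of a positive test result.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior probability of having the condition given a positive test.
posterior = sensitivity * prior / p_positive
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.161
```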
|
Data science is one collective term that is on everyone’s mouth these days, with its applications now being used across big companies, research institutes, and college projects. Since data science is utilised in every sector these days, it is crucial to have a sound knowledge of this vast subject. Although a wide range of information […]
|
["AI Trends"]
|
["Data Science", "Learn Data Science"]
|
Rohit Chatterjee
|
2020-02-13T14:00:00
|
2020
| 850
|
["data science", "Go", "machine learning", "AI", "neural network", "Python", "Ray", "deep learning", "analytics", "Learn Data Science", "Data Science", "R"]
|
["AI", "machine learning", "deep learning", "neural network", "data science", "analytics", "Ray", "Python", "R", "Go"]
|
https://analyticsindiamag.com/ai-trends/top-10-free-ebooks-to-learn-data-science/
| 3
| 10
| 3
| true
| true
| false
|
10,073,825
|
Behind Indian Government Supported AI & Robotics Innovation Firm
|
“I think India is a supermarket of problems,” but we also have the potential to solve all of them, said Mr. Umakant Soni, CEO, AI & Robotics Technology Park (ARTPARK), when asked about India’s ability to become a leader in AI. “We have all kinds of problems. If we look at transportation, there is a problem; if we look at the weather, there is a problem; if we look at roads, there is a problem. So, for AI, the problem data is actually very critical.” Mr. Umakant Soni, an alumnus of IIT Kanpur with numerous years of industry experience behind him, is working towards making India a leader in the field of AI. Besides developing some cool robots, ARTPARK has also undertaken various other projects. In an exclusive interview with Analytics India Magazine, Mr. Umakant Soni shares the vision for ARTPARK and some of the different projects they have undertaken. ARTPARK “We created ARTPARK to create breakthrough AI and robotics technology companies, which can impact a 2 billion plus population by 2030.” ARTPARK, which Mr. Umakant Soni also co-founded, is a non-profit backed by the Govt of Karnataka and the Dept of Science & Technology, Govt of India, under the Indian Institute of Science (IISc). With seed funding of INR 170 crore from the Department of Science & Technology, Govt. of India, under the National Mission on Interdisciplinary Cyber-Physical Systems (NM-ICPS) and an INR 60 crore grant from the Govt. of Karnataka, ARTPARK wants to create and boost the university research ecosystem in India. Mr. Umakant Soni, who is also an advisor to NITI Aayog, believes there is good enough research happening in universities in India; however, they often find it difficult to actually translate that research into useful products or companies that can scale. “When I was at IIT Kanpur, we were trying to start a startup based on my research but we couldn’t do that, because there was no ecosystem at all.” However, a lot has changed since then. “With each of the IITs and IIITs, we’ve created hubs and connected them together. So this is a massive research ecosystem that India has created. “So we feel if the talent can be made to work on the research that’s coming out, combined with the entrepreneurial talent, you know, we could actually have great companies coming out.” “We want to support this university ecosystem through enough grant money from the government of India and combine it with entrepreneurial talent and the VC ecosystem. Now we’ve recently created a USD 100 million venture fund to support the ecosystem.” Robots developed by ARTPARK XraySetu “We see that if we really want to enable health care at scale, people have to be kept really healthy outside of the hospitals. This is very critical. “By 2030, it is estimated that 80 per cent of healthcare is going to be outside of the hospital.” Amid the Covid-19 pandemic, researchers at ARTPARK developed XraySetu, an AI model built in collaboration with HealthTech startup Niramai Health Analytix. How it works: you take a picture of an X-ray on your phone, send it across through a chatbot on WhatsApp, and you get a report about your lung health in less than five minutes. (Source: xraysetu.com) “When we first released it, it was being used by a few doctors in Uttar Karnataka. But what surprised us was that soon more than 10,000 doctors and technicians were using XraySetu across India. 
Not only that, when we looked at the logs, we realised that 20-plus countries actually used it outside of India.” Now, ARTPARK is in the process of turning it into a company. Gold standard datasets “Then another thing that we’re trying to do in healthcare is creating gold standard datasets.” Most AI companies use their funding to create good datasets to train their AI algorithms, and this absorbs a lot of the funds they have raised. “But what if you could actually create these gold standard datasets and offer them to people to use, so that they don’t have to do everything from scratch?” To create these gold standard datasets, ARTPARK is working closely with the Government of India and also with the private sector. “First of all, we are starting with cancer. Oral cancer is a big challenge in the northern side of India. Cervical cancer among women, too, is a concern. We feel that if we can start there and progressively go into more and more diseases, it could be a big, big game changer.” Gold standard language datasets Another area where ARTPARK is trying to create gold standard datasets is in Indic languages. “We feel that the biggest beauty of India is the diversity in languages. We are working across more than 20 Indic languages.” Recently, Prime Minister Narendra Modi launched Project Bhashini to help deliver web content in different Indic languages. The language datasets developed by ARTPARK will be significant in achieving the goals of Project Bhashini. “We need to figure out a way to make the local dialect more appreciable to the machine that is trying to understand it. And maybe that is where the true AI will actually come out.” Project Eklavya By leveraging AI and robotics, we can unlock human potential to the next level, Mr. Umakant Soni said. “Another big area that we are focusing on is education and learning because, while we have spent billions of dollars on machine learning, we haven’t really looked at human learning with that same care and concern.” “Today, AI algorithms are learning to beat the Grandmasters in chess in just seven to eight hours. That is scary because if you’re not elevating human learning to the next level, then we potentially are setting up a very tricky situation where humans will not be able to differentiate themselves with respect to machines.” Taking the example of Portuguese footballer Cristiano Ronaldo, Mr. Umakant Soni said that he was mesmerised by the way he moved to head a goal in a football match. “And here we are, we’re trying to work on this robotic dog, and, of course, it’s working, but the fluidity with which these athletes move is actually remarkable, and that makes you realise the extent of human intelligence.” “So in some sense, in trying to create artificial intelligence, we are learning to appreciate human intelligence.” However, one of the biggest challenges today is that our education system does not appreciate human intelligence. “We’ve been working on this experiential learning and looking at how we can change our schools. How can we change our colleges? How can we create the best environment for these human brains to reach their potential, which is unlimited.” In this regard, ARTPARK is working with the Government of Karnataka and Aalto University, Finland, among other parties. Becoming a leader in AI Last year, when former NITI Aayog CEO Amitabh Kant said that India is well-positioned to become a global leader in AI, it made us wonder if India can compete with the likes of China and the US when it comes to AI. Mr. 
Umakant Soni thinks we can. India possesses the right resources to achieve these goals. We have the right pool of talent that can propel India towards becoming a leader in AI. “I think we are there in terms of talent as well. I completely agree. Around 11 per cent of top AI researchers are actually, you know, either born in India, or they are people of Indian origin, so we do have cutting-edge talent. Now with this whole NM-ICPS mission, I think we’ve been producing more talent,” Mr. Umakant Soni said. Now, the challenge is to use these resources to their full potential and develop an ecosystem where AI development can progress smoothly. “I think with the NM-ICPS mission, we are halfway there, with the national AI mission also coming into play. It’s a billion dollars of investment in AI research and innovation, and this could really propel AI development in India.” “So if you ask me, by 2030, I see that a few of these societal-scale AI systems will be in play, and most governments will be trying to leverage AI to run the complex governing mechanism. So would India be playing a role in that? As a leader, I would say there’s a very high probability that we could be a potential leader.” In fact, India is the best possible place to try out new technology, according to Mr. Umakant Soni. “If we can get self-driving cars to work on Silk Board in Bengaluru, it will work in the US, it will work in Europe, it will work in the backyard of Elon Musk as well,” he joked.
|
If we can get self-driving cars to work on Silk Board in Bengaluru, it will work in the US, it will work in Europe, it will work in Elon Musk’s backyard as well.
|
["AI Trends"]
|
["IISc", "Indian Institute of Science"]
|
Pritam Bordoloi
|
2022-08-29T16:00:00
|
2022
| 1,450
|
["Go", "machine learning", "artificial intelligence", "startup", "AI", "innovation", "IISc", "RAG", "Ray", "analytics", "R", "Indian Institute of Science"]
|
["AI", "artificial intelligence", "machine learning", "analytics", "Ray", "RAG", "R", "Go", "innovation", "startup"]
|
https://analyticsindiamag.com/ai-trends/behind-indian-government-supported-ai-robotics-innovation-firm/
| 4
| 10
| 3
| false
| true
| false
|
29,556
|
More Than Half Of India Inc Hires Flexi-Staff For IT, Says New Report
|
Indian Staffing Federation (ISF), an apex body of the flexi-staffing industry, this week reported that almost 58 percent of Indian companies hire flexi-staff in the IT sector. Flexi-hiring or flexi-staffing refers to temporary jobs. According to ISF’s latest report, Karnataka, Maharashtra and Delhi NCR are the top three states that dominate the IT flexi-staffing industry in India. Rituparna Chakraborty, president at ISF, told a leading daily, “Organisations across sectors are increasingly opting for flexi-staff due to their flexibility and deep expertise in niche technologies… The IT flexi-staffing industry is expected to observe a paradigm shift in demand and revenue across technology domains. New product development is estimated to have the highest revenue and demand growth rate because of constant innovations and the requirement of niche skillsets.” According to their report published in May 2018, the Indian staffing market was valued at $4.11 billion. The report also said that BFSI, Infrastructure, Construction and Energy, and Logistics, Transport and Communications will together employ 1 million flexi-staff by the end of 2018. With automation and innovation changing the nature of work, BFSI contributes 12%, Infrastructure, Construction and Energy contribute 11%, and retail and e-commerce together comprise 5% of the national flexi-staff. The retail market’s growth has been witnessed not only in the metropolitan cities but also across numerous tier 2 and tier 3 cities, providing enhanced business and job opportunities for the local youth.
|
Indian Staffing Federation (ISF), an apex body of the flexi-staffing industry this week reported that almost 58 percent of Indian companies hire flexi-staff for in the IT sector. Flexi-hiring or flexi-staffing refers to temporary jobs. According to ISF’s latest report, Karnataka, Maharashtra and Delhi NCR are the top three states that dominate the IT flexi-staffing […]
|
["AI News"]
|
[]
|
Prajakta Hebbar
|
2018-10-24T13:40:11
|
2018
| 234
|
["programming_languages:R", "AI", "innovation", "automation", "GAN", "R"]
|
["AI", "R", "GAN", "automation", "innovation", "programming_languages:R"]
|
https://analyticsindiamag.com/ai-news-updates/flexi-staff-india/
| 3
| 6
| 3
| false
| false
| false
|
10,105,113
|
How Nutanix is Handling Healthcare Challenges in India
|
Mumbai-based Indian pharmaceutical MNC IPCA Laboratories has been leveraging Nutanix’s hybrid multi-cloud for a while now in the production and marketing of drugs, formulations, drug intermediates, and active pharmaceutical ingredients (APIs). Known for its global presence, IPCA is a leading API exporter from India, recognised for its reliable supply chain and cost competitiveness in the pharmaceutical market. “We witnessed a significant increase in application speed, approximately 60 to 70%, compared to the previous standalone setup,” Ashok Nayak, chief information officer, IPCA, told AIM during Nutanix’s flagship conference .NEXT held in Mumbai last week. During the infrastructure transition from legacy systems, the primary goal for IPCA was to identify high-performance systems to ensure uninterrupted 24/7 operation, crucial for maintaining data integrity and security, especially in the pharmaceutical industry. The company sought an infrastructure that could scale up rapidly and efficiently to meet demands as applications were frequently added during their digitalisation journey. “And that is when Nutanix solved our challenges as it provides the necessary functionality to handle large datasets and offered benefits in terms of performance, scalability, and visibility of both centralised and decentralised setups,” Nayak added. High-performance resources were essential to prevent downtime. Key factors making this happen included low latency, application speed, and the management of extensive databases related to various functions such as manufacturing, supply chain, R&D, events, medical services, and clinical trials. How Nutanix is Helping Indian Pharma “Considering the diversity of our customer needs, we ensure assistance regardless of where they choose to deploy their applications,” Andrew Brinded, chief revenue officer at Nutanix told AIM during the event. Customers have varied preferences for handling data in these deployments, with some choosing local analysis and storage, while others opting for centralising information. “What we provide is that common experience wherever it is, you have the same management plane, whether you’re moving workloads, the public cloud wherever you keep your on-premise or whether you’re having an edge and then you have the same data plane across everything,” explained Faiz Shakir, VP and managing director – sales, Nutanix India and SAARC. This approach contrasts with traditional architectures, where managing diverse environments with separate teams for servers, storage, and networking can be complex. In a traditional setup (presumably not using Nutanix), it would typically require a team of five to ten individuals to manage various aspects of the infrastructure, including servers, storage, and networking. However, with Nutanix’s technology and consolidation approach, the need for personnel significantly decreases. “We have seen that under Nutanix, the same tasks can typically be handled by one or two individuals. This reduction in manpower indicates a streamlined and more efficient management process, contributing to overall operational efficiency and cost-effectiveness,” Mike Phelan, SVP of global solution sales at Nutanix told AIM. This consolidation also extends to the data centre, with Nutanix’s hyper-converged infrastructure (HCI) approach leading to savings in floor space and energy consumption. Another important feature which has helped IPCA is the data security solutions provided by Nutanix. 
Its data storage fabric, along with encryption and compression enhances the security of the company’s data. “In the pharmaceutical industry, traceability of data and proving the source to regulators is crucial, making data security our top priority. Nutanix’s functionality in providing secure data storage has helped us in securing sensitive pharmaceutical data,” said Nayak. Not just IPCA, Apollo Hospitals also leverages the Nutanix cloud platform. By running hospital information system (HIS), electronic medical records (EMR), and Picture Archiving and Communication System (PACS) applications on Nutanix, Apollo witnessed quick hospital admissions, ensured highly available patient records and diagnostic-quality imaging, and freed up IT resources for strategic projects. Managing Generative AI Workloads Back in August, Nutanix came up with Nutanix’s GPT-in-a-Box for its customer base of 25000, offering a unified, scalable AI solution, built on the Nutanix cloud platform that supports GPU-enabled servers for diverse compute, storage, and networking needs. It aims to simplify the deployment and management of AI workloads with open-source collaborations with the PyTorch and Kubeflow MLOps platform and supports a range of LLMs, including Llama2, Falcon, and MPT. Nutanix also partnered with NVIDIA to enhance enterprise AI so that customers can leverage NVIDIA’s GPUs for scalable, efficient AI workloads in modernised data centres and diverse cloud applications. “Our early investments in AI, coupled with the integration of ML for capacity planning, provided a dynamic platform capable of auto-size and auto-scaling,” said Phelan. India as a Market “We have chosen to make significant investments in the Indian market, with approximately a third of our workforce based here,” said Shakir. Nutanix is significantly scaling up its operations in India with several key initiatives including opening a large new facility in Pune in 2022, serving as a crucial hub for their global service and support, and providing round-the-clock assistance. Bengaluru serves as the India headquarters for the company. “India’s market is particularly exciting for us due to its size, economic growth, and the rapid adoption of technology. The enthusiasm for technology in India is substantial, given the country’s strong orientation toward software,” concluded Brinded.
|
With Nutanix, IPCA witnessed a significant increase in application speed, approximately 60 to 70%, compared to the previous standalone setup.
|
["Deep Tech"]
|
[]
|
Shritama Saha
|
2023-12-19T10:47:07
|
2023
| 837
|
["Go", "AI", "PyTorch", "ML", "MLOps", "RAG", "Kubeflow", "Aim", "generative AI", "R"]
|
["AI", "ML", "generative AI", "MLOps", "Kubeflow", "Aim", "PyTorch", "RAG", "R", "Go"]
|
https://analyticsindiamag.com/deep-tech/how-nutanix-is-handling-healthcare-challenges-in-india/
| 3
| 10
| 4
| true
| false
| false
|
10,055,981
|
Top 5 Open Source Indian Projects Of 2021
|
Open source projects are publicly available and accessible, promoting collaborative participation, rapid prototyping, transparency, meritocracy, and community-oriented development. The philosophy behind the concept of an open-source ecosystem is the essential respect for a user’s freedom. It is an increasingly important aspect of Digital India. The National Policy on Information Technology 2012 of India has added to its objectives the imperative “Adoption of open standards and promotion of open source and open technologies”. GitHub, the platform that hosts open source projects, has a collection called “Made in India” to highlight the significant contribution of Indians on the platform. In this regard, this article covers five emerging open source projects from India in 2021. The Algorithms The Algorithms is an Indian project that documents and models helpful and interesting algorithms using code. The open-source community can cross-check and verify the contributions of others and collaborate to solve problems. The MIT-licensed project implements algorithms in Python, Java, JavaScript, C, Go, C++, and other high-level programming languages (a small sketch of the kind of annotated implementation it collects appears at the end of this article). Furthermore, the project includes repositories that explain popular algorithms in simple language with examples and links to their implementations in various programming languages. These algorithm explainer repositories are available in English, French, Hebrew, Indonesian, Korean, Nepali, and Spanish. DIVOC Digital Infrastructure for Vaccination Open Credentialing (DIVOC) is a project by the eGov Foundation that aims to create an open-source digital platform for large-scale vaccination and digital credentialing programmes. Built for an Indian scale, the software is flexible and extendable and can be used across multiple health programmes. The MIT-licensed project was created in India to help digitally orchestrate large-scale vaccination and certification using open-source digital public goods that are standards-driven, scalable, and secure. The architecture principles of the project are: microservices- and API-based; interoperable; scalable; flexible and configurable; observable; resilient; privacy and security by design; open standards; trustable; and internationalizable. indicTrans indicTrans is a Transformer 4x multilingual model trained on the Samanantar dataset, which is the largest publicly available parallel corpora collection for Indic languages at the time of writing. The open-sourced project has its repositories available on GitHub. It is a single-script model, where all the Indic data is converted to the Devanagari script, which allows for better lexical sharing between languages for transfer learning, prevents fragmentation of the subword vocabulary between Indic languages and allows the use of a smaller subword vocabulary. There are currently two models – Indic to English and English to Indic – supporting 11 Indic languages. Hoppscotch Hoppscotch is an open-source, lightweight request builder and API testing tool used by over 150,000 developers globally. It was built from the ground up with ease of use and accessibility in mind, providing all the functionality needed by API developers with a minimalist, unobtrusive UI. It has surged in popularity in a short period and has become widely adopted by the developer community. Hoppscotch is built with HTML, CSS, SCSS, Windi CSS, JavaScript, TypeScript, Vue, and Nuxt.
It can also be installed as a PWA on the device or used to test real-time connections such as WebSocket, Socket.IO, MQTT, and SSE. Additionally, the UI is customisable, and one can choose combinations of background, foreground and accent colours. Finally, developers can contribute to the project via GitHub Flow. The MIT-licensed project has over 34.4k stars on GitHub and 2.4k forks. Appsmith Appsmith is a low-code project for building custom business software like admin panels, internal tools, and dashboards. One can use any of the 35+ pre-built UI widgets that connect to any database, GraphQL or REST API. The open-source framework is a toolkit that allows users to build UIs visually with pre-made widgets on the canvas to create dashboards, workflows, and admin panels. Everything in Appsmith is a JavaScript object. Developers can contribute and validate their contributions on GitHub. The project is preferred by developers at leading companies – AWS, WazirX, Swiggy, IBM, etc.
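As a rough illustration of the kind of short, annotated implementation The Algorithms repositories collect, here is a minimal binary search in Python. This sketch is purely illustrative and is not taken from the project itself; the function name and the sample data are our own.

def binary_search(sorted_items, target):
    """Return the index of `target` in `sorted_items`, or -1 if it is absent.

    Runs in O(log n) time by repeatedly halving the search interval.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2            # middle of the current interval
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1                  # discard the left half
        else:
            high = mid - 1                 # discard the right half
    return -1

if __name__ == "__main__":
    primes = [2, 3, 5, 7, 11, 13, 17]
    print(binary_search(primes, 11))  # prints 4
    print(binary_search(primes, 4))   # prints -1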
|
This article is a deep dive into the most promising open-source projects that have popped up in 2021
|
["AI Trends"]
|
["Open Source AI"]
|
Abhishree Choudhary
|
2021-12-16T13:00:00
|
2021
| 641
|
["AWS", "AI", "ML", "TypeScript", "RAG", "Open Source AI", "microservices", "Aim", "Python", "JavaScript", "R"]
|
["AI", "ML", "Aim", "RAG", "AWS", "microservices", "Python", "R", "JavaScript", "TypeScript"]
|
https://analyticsindiamag.com/ai-trends/top-5-open-source-indian-projects-of-2021/
| 3
| 10
| 1
| true
| true
| false
|
10,039,965
|
Top 8 AI-Powered Privacy Tools To Fool Facial Recognition Systems
|
Facial recognition is one of the most controversial forms of AI. People, communities, and many activist groups have raised concerns against this technology because it jeopardises privacy and compromises an individual’s security. While governments of different countries have placed mild to strict restrictions on such tech, there still seems to be a lot of scepticism and fear around it. This has given rise to the development of several AI-powered privacy tools that help ‘fool’ facial recognition technology. We have listed some of the most prominent ones. Fawkes Developed at the University of Chicago’s SAND Lab, Fawkes very subtly alters photographs at a pixel level to trick facial recognition systems. Fawkes poisons the models by injecting small changes into the images; these changes are invisible to the human eye. This process is called image cloaking (a conceptual sketch of pixel-level perturbation appears at the end of this article). When someone tries to use these cloaked images to build a facial recognition model, the cloaks distort what the model learns about the face. Anonymizer It has been developed by a startup named Generated Media. It uses GANs to escape detection by facial recognition software. With Anonymizer, the user can upload a real photo to the software, which generates a variety of fake images that look like the original image. Users can upload these fake images to the social media of their choice. If a facial recognition company scoops these images from the internet, it still would not be able to recognise the ‘real’ user. Anonymous Camera Anonymous Camera works by blurring or pixelating entire people or any body feature that can be used to recognise and identify them. It blurs a person while recording videos, along with distorting the audio and stripping the video of any metadata that might have been automatically embedded. Also, it processes the captured or recorded data so that it is safe in case the phone gets lost. This app has been used in movements like Black Lives Matter. It is currently available only on Android. CycleGAN CycleGAN was originally developed as an image transformation tool that uses generative adversarial networks to automatically train image-to-image translation models without paired examples. A study showed that CycleGAN can also be used to fool facial recognition technology. It uses a GAN to autonomously generate completely fake but real-looking images from a real picture of a human face. The GAN uses generative networks to create synthetic data and discriminative networks to assess the quality of the generated images until they meet accepted quality standards. Everest Pipkin Image Scrubber, a tool built by artist Everest Pipkin, can be used to remove metadata from a photograph. It also allows the user to selectively blur parts of the image to cover the face and other recognisable features. One of the most significant features of this tool is that it can work offline too. A user can load the image on the page or add the tool to the home screen and use it without an internet connection. DeepPrivacy DeepPrivacy has been developed by the Norwegian University of Science and Technology. This tool uses machine learning algorithms to synthesise new faces from a database of facial images taken from Flickr to replace the existing image. The software works on generative adversarial networks, which lets the photograph retain its original conditions while key points such as the eyes, cheekbones and nose change. This change simulates the appearance of a new face when facial recognition software runs on the image.
Brighter.ai Startup Brighter.ai has developed image and video anonymisation software to conceal an individual’s true identity in a photograph. It automatically detects the face and creates a new one that keeps the original attributes with slight modifications. The new face escapes detection by facial recognition software. The program is customisable to the user’s needs. It also removes metadata, like a bodily feature or location, that could play a part in identification. Face Blur Face Blur is a tool from SightCorp, an AI spin-off from the University of Amsterdam that specialises in developing easy-to-use facial analysis software. Face Blur uses deep learning technology to detect the faces that need to be hidden in any video stream. The application blurs or pixelates the face, but the audio and other features in the video remain untouched. It can also redact individual faces or crowds.
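To make the idea of pixel-level cloaking concrete, here is a deliberately simplified sketch in Python using NumPy and Pillow. It only adds a small random perturbation to an image; real cloaking tools such as Fawkes compute targeted, optimised perturbations against face recognition feature extractors, so random noise like this should not be expected to defeat an actual system. The file names are hypothetical.

import numpy as np
from PIL import Image

def perturb(path_in, path_out, epsilon=3):
    """Add a barely visible pixel-level perturbation to an image (illustration only)."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)  # small per-pixel shifts
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)           # keep values in the valid pixel range
    Image.fromarray(cloaked).save(path_out)

perturb("portrait.jpg", "portrait_cloaked.jpg")  # hypothetical input/output paths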
|
Facial recognition is one of the most controversial forms of AI. People, communities, and many activist groups have raised concerns against this technology because it jeopardises privacy and compromises an individual’s security. While governments of different countries have placed mild to strict restrictions on such tech, there still seems to be a lot of scepticism […]
|
["AI Trends"]
|
["AI (Artificial Intelligence)", "Facial Recognition"]
|
Meenal Sharma
|
2021-05-12T12:00:00
|
2021
| 699
|
["Go", "machine learning", "Facial Recognition", "synthetic data", "AI", "programming_languages:R", "programming_languages:Go", "deep learning", "GAN", "R", "AI (Artificial Intelligence)", "startup"]
|
["AI", "machine learning", "deep learning", "R", "Go", "GAN", "synthetic data", "startup", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-trends/top-8-ai-powered-privacy-tools-to-fool-facial-recognition-systems/
| 3
| 10
| 1
| false
| true
| true
|
10,001,019
|
These Layers Of An IoT Architecture Are Crucial For Effective Solutions
|
It’s easy to code in a programming language without understanding how the compiler works. The same can be said about many of the technologies and devices that we take for granted. Who among us can explain how a mobile phone is capable of tracking our current position? We typically learn just enough to apply technology without taking the time to understand how and why it operates. Why Should We Understand IoT It’s simple to imagine IoT purely in terms of what it produces. For example, everyone can wrap their heads around the idea of a microwave oven that sends a text message when it has completed its cycle, or a fuel indicator that turns on a dashboard light when the tank needs fuel. Having this elementary level of knowledge enables us to use IoT without having to speculate about its foundations. While it is not essential to understand IoT down to the bits and bytes that might travel across a BLE (Bluetooth Low Energy) connection, one cannot overlook the fact that there is a BLE link and that messages are crossing it. We also need to understand how a value reported by a sensor is converted into an actionable object with business meaning. IoT Layers The significant components of a common IoT platform span sensor to cloud, cloud to enterprise, and ultimately, display and consumption. The simplest view of a platform is to divide it into three distinct layers, namely perception, network, and application. Each layer may be partitioned into its own set of sub-layers, but as a straightforward way to understand the architecture, three full layers work well. Perception Layer The perception layer consists of the physical components that collect telemetry information, normalize the data, and eventually pass it to cloud (network) applications for ingestion, processing, and storage. This is where we find the sensors that measure temperature, air pressure, orientation, location, light, heart rate, blood pressure, weight, and a myriad of other data points. These sensors can be smaller than a grain of sand or larger than anything we can comfortably carry. Sensors are the devices that make smart cities smart and automate our homes and office buildings. Sensors dispatch their data over any number of different transports and protocols. Common data transports include LTE, Wi-Fi, BLE, LoRa (Long Range), and ULE. Distance, power consumption, cost, and size are all factors in the choice of transport for a given sensor. Sensors move data to cloud applications in one of two ways. First, some transfer their telemetry data directly to the cloud. This demands that the sensor support an appropriate data transport; for instance, Wi-Fi and LTE would work for these sensors. The second and more traditional way is for a sensor to attach to a gateway, which in turn transfers data between the perception layer and the network (cloud) layer. These gateways collect data from different sensors, potentially of different types. They then normalize the data before serving as a conduit up to the network. Conversely, they can transfer data from the cloud (network) back down to the sensor. Network Layer The network (cloud) layer serves as the integration point for all connected sensors and gateways. Among other things, cloud services often provide: provisioning tools for all aspects of the IoT platform; mechanisms to meter, filter, format, organize and collect telemetry data;
data storage for both the short and long term; tools to manage data flow and stream processing; a rules engine to convert incoming data into actionable items – for instance, send an email if the air pressure for a particular sensor drops below a specified value, a pattern illustrated in the sketch at the end of this piece; and external integration mechanisms such as a RESTful API or MQTT (Message Queuing Telemetry Transport). As with any cloud strategy, these services can be deployed publicly, privately, or in a hybrid manner. Application Layer The application layer is the collection of business applications that consume the functionality exposed by the network layer (for example, through RESTful APIs). These may be cloud applications themselves or live inside the bounds of an enterprise’s on-premises network. For example, Zang IoT workflows are still part of the enterprise layer even though they are hosted in the Zang public cloud. This means that the enterprise layer is not exclusive to the functionality implemented by an enterprise’s own resources. Think of it as a logical collection of applications that access the cloud layer to perform business logic. The enterprise layer is also where service management technology resides. For example, the cloud-based ServiceNow ITSM (IT Service Management) system can be used to bring IoT into an enterprise and allow it to be used by humans and machines alike. Depending on the platform, some functionality can exist at either the network or the enterprise level. An example of this crossover is an IoT dashboard. Some platforms might provide a real-time dashboard inside their cloud offering, while also offering businesses the ability to create their own via an MQTT connection. One Comprehensive System This configuration of services gives a manageable and extremely scalable platform that can support anything from a handful of centrally located sensors to millions of sensors dispersed across the world. Solutions need to be architected to establish the required connections, storage, and both real-time and historical access to the collected data. It’s critical not to over-provision and waste money, nor to under-provision and miss crucial data. It should also be mentioned that security is paramount and must be incorporated at every level. This involves encrypting data in transit as well as data at rest. Access to API functions must be authenticated. It’s also necessary to provision devices in a way that ensures rogue devices and gateways are identified and isolated from other devices and cloud applications. IoT data will be used in life-or-death situations, and it’s essential that the data can be trusted beyond any suspicion.
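As a rough illustration of the flow described above – a perception-layer sensor publishing telemetry to the network layer, and a simple rule turning that telemetry into an actionable item – here is a minimal Python sketch using the widely used paho-mqtt client. The broker hostname, topic, threshold, and sensor reading are all assumptions made for the example, and the alert is just a print statement standing in for an email notification.

import json
import time
import paho.mqtt.publish as publish  # pip install paho-mqtt

BROKER = "broker.example.com"      # assumed broker hostname
TOPIC = "site1/sensors/pressure"   # assumed topic naming scheme
THRESHOLD_HPA = 980.0              # assumed rule threshold

def read_air_pressure():
    """Placeholder for a real sensor driver in the perception layer."""
    return 975.4  # hypothetical reading in hPa

# Perception layer: publish one normalized telemetry reading upstream over MQTT.
reading = {"sensor_id": "press-42", "hpa": read_air_pressure(), "ts": time.time()}
publish.single(TOPIC, json.dumps(reading), hostname=BROKER)

# Network-layer rules engine (sketch): turn incoming telemetry into an actionable item.
if reading["hpa"] < THRESHOLD_HPA:
    print(f"ALERT: pressure {reading['hpa']} hPa is below {THRESHOLD_HPA} hPa - notify operations")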
|
It’s obvious to code using a programming language without understanding how the compiler works. The same can be said about many of the technologies and devices that we take for granted. Who among us can explain how the mobile is capable of tracking our current position? We typically acquire just enough to apply technology without […]
|
["AI News"]
|
["IoT", "iot enterprise architecture"]
|
AIM Media House
|
2019-01-22T20:58:05
|
2019
| 967
|
["stream processing", "API", "programming_languages:R", "AI", "iot enterprise architecture", "Scala", "RAG", "programming_languages:Scala", "Rust", "GAN", "R", "IoT"]
|
["AI", "RAG", "R", "Rust", "Scala", "API", "stream processing", "GAN", "programming_languages:R", "programming_languages:Scala"]
|
https://analyticsindiamag.com/ai-news-updates/these-layers-of-an-iot-architecture-are-crucial-for-effective-solutions/
| 3
| 10
| 2
| true
| true
| false
|
10,166,645
|
Nokia, Honeywell Join Numana to Advance Quantum-Safe Networks
|
Finnish multinational telecom company Nokia Corporation and US-based Honeywell Aerospace Technologies have partnered with Canadian non-profit Numana to drive the development of quantum-safe networks. The collaboration will use Numana’s Kirq Quantum Communication Testbed in Montreal, Quebec, to test and validate quantum-secure technologies. This initiative addresses the growing need for secure digital infrastructure in North America and globally, focusing on mitigating quantum security threats. The partnership involves contributions from Nokia, Honeywell, and other ecosystem partners. Nokia will provide expertise in post-quantum networking, including advanced IP routers and optical transport nodes. Moreover, Honeywell will deliver quantum-secure encryption keys for space-to-terrestrial communication. Francois Borrelli, interim CEO at Numana, emphasised that the testbed will enable enterprises, researchers, and government agencies to explore and validate secure networking technologies in real-world conditions. The Kirq facility will also serve as a hub for education, ecosystem development, and research. Workshops and training sessions are planned to build awareness and skills in quantum-safe technologies. Lisa Napolitano, VP and GM of space at Honeywell Aerospace Technologies, highlighted the importance of securing satellite networks. “Our quantum encryption technology will play a critical role in improving the integrity of data transmitted from space to Earth.” The initiative aligns with commitments by Quebec and Canada to lead in cybersecurity and quantum innovation. By fostering collaboration among industry leaders, academia, and government agencies, the partnership aims to accelerate the adoption of quantum-safe solutions while preparing organisations for the challenges of the Quantum 2.0 era. Collaborative research projects will focus on innovative solutions for secure network connectivity. “This partnership is a step toward creating a quantum-secure economy,” said Jeffrey Maddox, president of Nokia Canada. India has also been working towards secure satellite-to-ground quantum communication, which was explained to AIM by Urbasi Sinha, professor at the Raman Research Institute, in an interview about her research. For the country, secure quantum technology isn’t just an aspiration, it’s a national mission. Led by Ajai Chowdhry, the co-founder of HCL and the founder of EPIC Foundation, India’s National Quantum Mission places quantum at the forefront of the country’s strategic priorities.
|
Nokia will provide expertise in post-quantum networking, advanced IP routers and optical transport nodes.
|
["AI News"]
|
["honeywell", "Nokia", "quantum", "quantum technology"]
|
Sanjana Gupta
|
2025-03-25T17:13:39
|
2025
| 341
|
["Go", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "Git", "Nokia", "quantum", "Aim", "ViT", "GAN", "R", "honeywell", "quantum technology"]
|
["AI", "Aim", "R", "Go", "Git", "GAN", "ViT", "innovation", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/nokia-honeywell-join-numana-to-advance-quantum-safe-networks/
| 2
| 10
| 2
| false
| false
| false
|
10,129,388
|
Anthropic Launches Free Android Claude AI Chatbot to Expand Mobile Reach
|
Anthropic, the artificial intelligence company backed by Amazon, has officially released its Claude mobile app for Android devices. This launch comes after the app’s initial debut on iOS, marking a significant expansion of Claude’s availability to a broader mobile user base. Free Claude AI Chatbot for Android Brings Advanced AI Features The new Android app offers users free access to Anthropic’s Claude 3.5 Sonnet model, providing a range of sophisticated AI capabilities. Key features include: cross-platform synchronization, allowing users to continue conversations across devices; real-time image analysis through photo uploads or camera captures; multilingual processing and translation; and advanced reasoning for complex problem-solving. The app is available for free download on the Google Play Store, with both free and premium subscription options. Free users can access the basic functionalities, while Pro and Team plans offer additional benefits such as increased usage limits and access to more advanced models. Anthropic’s Strategic Move to Capture Android Market Share Anthropic’s decision to launch on Android after iOS appears to be a calculated business strategy. By initially targeting Apple’s ecosystem, known for its high-spending and early-adopter user base, Anthropic likely aimed to establish a premium positioning for Claude. The subsequent Android release now allows Anthropic to tap into the world’s largest mobile operating system, with over 3 billion active users. This move positions Claude to compete more directly with other AI assistants like OpenAI’s ChatGPT, which has already gained significant traction on both iOS and Android platforms. As the AI assistant market continues to evolve rapidly, Anthropic’s expansion to Android could prove crucial in scaling its user base and collecting valuable real-world data to improve its AI models. The coming months will be critical in determining whether Claude can leverage this opportunity to challenge the dominance of established players in the mobile AI assistant space.
|
Anthropic has officially released its Claude AI chatbot mobile app for free for Android devices.
|
["AI News"]
|
["Claude"]
|
MIA
|
2024-07-17T15:04:16
|
2024
| 309
|
["Anthropic", "ChatGPT", "AI assistants", "artificial intelligence", "OpenAI", "AI", "Claude", "RAG", "Aim", "Claude 3.5", "R"]
|
["AI", "artificial intelligence", "ChatGPT", "OpenAI", "Claude 3.5", "Anthropic", "Aim", "RAG", "AI assistants", "R"]
|
https://analyticsindiamag.com/ai-news-updates/anthropic-launches-claude-ai-chatbot-for-android-to-expand-mobile-reach/
| 2
| 10
| 2
| true
| false
| false
|
10,165,412
|
bp’s Digital Core Discovery Event Returns to Pune to Showcase Tech-Driven Energy Transformation
|
British petroleum company, bp, is set to host an exclusive event titled “Energy Meets Innovation” — bp’s digital core discovery event on March 28, at the JW Marriott Hotel in Pune. This event, in partnership with AIM Media House, will showcase how digital technologies, artificial intelligence (AI), and cloud computing are redefining the energy sector, aligning with bp’s vision of driving growth, efficiency, and sustainability. The summit will bring together industry leaders, technology experts, and professionals to discuss how digital innovation is shaping the future of energy, benefiting both India and the global community. This upcoming event follows the success of bp’s previous Digital Tech Summit, which was held on January 31. The summit had provided a platform for attendees to engage with technological advancements in the energy sector. This month’s event aims to build on this momentum with deeper discussions and hands-on experiences. Register Now bp’s digital core discovery event will feature discussions from bp leaders and technology partners, including Microsoft, AWS, and Salesforce. Attendees will gain insights into how digital transformation is enhancing energy security, optimising operations, and driving sustainability. The event will kick off with a welcome address by Arnab Nandi, Vice President of bp Tech Centre, followed by a keynote from Fredrika de Courcy Arora, Vice President of Digital Workplace at bp, who will discuss how technology is shaping bp’s mission to address the energy trilemma. Sessions throughout the day will explore how bp and AWS are innovating solutions for energy businesses and customers, how Microsoft is enabling world-class cloud infrastructure and workplace systems at bp, and how Salesforce is delivering industry-leading solutions for bp’s operations. The summit will also highlight the partnership between SAP and bp in implementing a world-class ERP (enterprise resource planning) platform. The event will also provide insights into career opportunities at bp’s digital tech hub in Pune, showcasing pathways for professionals in software engineering, enterprise technical engineering, data analytics, service delivery, architecture, product management, and security. Experts specialising in Microsoft 365, Azure, AWS, SAP, Salesforce, and ServiceNow are encouraged to participate. With the success of the summit held in January setting the stage, this month’s event is expected to further accelerate discussions on the role of technology in the energy sector, making it a must-attend gathering for professionals looking to be at the forefront of this transformation. bp’s Bond with India bp’s long-standing commitment to India is evident in its strategic partnerships and significant contributions to the nation’s energy landscape. As one of the largest integrated energy companies globally, bp accounts for 30% of India’s natural gas production and collaborates with key players like Reliance and ONGC to bolster the country’s energy security. Register Now The event will also spotlight innovations from Castrol, a bp company and a leader in the Indian automotive lubricants market. With over a century-old presence in India and a reputation for high performance and quality, Castrol continues to drive innovation. This is exemplified by its Edge X product, endorsed by Bollywood superstar Shah Rukh Khan. So, mark your calendar for this event and join bp in driving the future of energy innovation.
|
The summit will bring together industry leaders, technology experts, and professionals to discuss how digital innovation is shaping the future of energy.
|
["AI Highlights"]
|
["bp energy event", "bp event", "bp tech event", "british petroleum event", "Digital Core Discovery Event", "Digital event", "Energy Transformation", "Pune event", "Tech-Driven"]
|
Vidyashree Srinivas
|
2025-03-07T12:47:24
|
2025
| 516
|
["Pune event", "Git", "Energy Transformation", "R", "artificial intelligence", "RAG", "analytics", "Tech-Driven", "bp tech event", "AWS", "AI", "cloud computing", "bp event", "british petroleum event", "Digital event", "Digital Core Discovery Event", "Aim", "bp energy event", "Azure"]
|
["AI", "artificial intelligence", "analytics", "Aim", "RAG", "cloud computing", "AWS", "Azure", "R", "Git"]
|
https://analyticsindiamag.com/ai-highlights/bps-digital-core-discovery-event-returns-to-pune-to-showcase-tech-driven-energy-transformation/
| 3
| 10
| 5
| true
| false
| false
|
10,074,136
|
Stable Diffusion, a milestone?
|
A CERN scientist created the world wide web (www) in 1989 to meet the growing demands for automated information sharing between scientists globally. Although it was a significant invention, it wasn’t until it was made public in 1993 that it altered the way we live. The web could not have flourished if CERN hadn’t decided to make it available under an open licence. Much like CERN, Stability AI has also chosen to alter how people view its technology by allowing them to interact with it freely. Recently, Emad Mostaque, the founder of Stability AI, announced that the code for Stable Diffusion would be open. In light of this announcement, speculation that this was just another text-to-image generator quickly gave way to Stable Diffusion’s reputation as a game changer. “I’m running Stable Diffusion locally, and it is mind-blowing what it can do; I’ve done a paint that would take me 6+ hours to make in one hour and a half with its help. It’s amazing,” says a user on Krita’s social platform. What’s happening? Since Stable Diffusion is open-sourced, users can either explore it online or download the model directly onto their systems. In addition to its general user accessibility, the model is also available for commercial purposes. While launching, Emad Mostaque had claimed, “Code is already available as is the dataset. So everyone will improve and build on it.” As it turns out, people are already improving it. In a now-viral Reddit post, a user claimed to have prompted the model with an image and a text description to generate a hyper-realistic image of a faraway, futuristic metropolis with lofty skyscrapers enclosed in a massive transparent glass dome. The model was able to create the image as directed in the image prompt and even took minute details from the text prompt into account. Considering the reactions to the resultant image, it is no wonder that Mostaque decided to introduce this feature in DreamStudio. Numerous Stable Diffusion plug-ins are being introduced by users on Twitter and Reddit. This will undoubtedly spur additional innovation in the area. For instance, a Figma plug-in lets a user generate practically anything by providing information about a subject’s fundamental shape and position. Several other plug-ins have also been created. For example, a Reddit user asserted that they had successfully created a Photoshop plug-in that can blend the space between two images into one complete image. Another user made a Stable Diffusion plugin for Krita, and an animated video was also made using Stable Diffusion. While the video quality still has room to improve, it makes one wonder what the future holds for AI art. In addition to plug-ins, a Colab notebook using the diffusers library with a Gradio GUI can do inpainting with Stable Diffusion (a minimal text-to-image sketch with the same library appears at the end of this article). For example, this Twitter user was able to replace a dog with a panda in an image with the help of Stable Diffusion. https://twitter.com/1littlecoder/status/1563555878414225409 What does the future hold? Adding a Stable Diffusion plugin to Photoshop may seem revolutionary to some. However, adding the same plugin to Blender has actually proven revolutionary for a certain user on GitHub, who has even open-sourced their code. It is widely believed that the inclusion of Stable Diffusion in Blender is likely to speed up the creation of animation and visual effects in motion pictures.
In case this combination proves successful, it is also expected to speed up the evolution and efficacy of the metaverse. These speculations have gained further momentum in light of Eros Investments’ decision to team up with Stability AI. The collaboration entails a partnership on projects in the fields of education, healthcare, and generative meta-humans. Eros is placing its bets on Stable Diffusion’s capacity to produce unique 3D avatars, which can then be used in metaverse or AR/VR games. The efforts from both ends are also expected to make it simpler to create fictional content. According to Kishore Lulla, Chairman of Eros Investments, “Users now have an opportunity for creative expression at a pace that didn’t exist before. Deep AI technology will be the future of product differentiation, and we are excited to lead this revolution.” Recently, Emad Mostaque had also claimed that they “expect the quality to keep improving as Stability AI introduces quicker, better, and more specialised models.” He further added that they intend to add audio soon, followed by 3D and video features. However, the opportunities for such combinations aren’t limited to these. It is believed that they can be introduced in Canva, along with platforms like WordPress, which could introduce official plugins. It is also exhilarating to imagine what would follow if Google decided to build a generative search engine in the future! The future of this technology may be difficult to predict as of now, but as Mostaque claims, “Stable Diffusion is a cutting edge AI that is open and inclusive”, it seems like we can look forward to it.
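For readers who want to try the kind of local setup the quoted users describe, here is a minimal text-to-image sketch with Hugging Face’s diffusers library. It assumes a CUDA-capable GPU, the diffusers and transformers packages, and access to the CompVis/stable-diffusion-v1-4 weights; it is not the exact setup any of the users mentioned above ran, and the prompt is only an example.

import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers

# Load the pretrained Stable Diffusion pipeline in half precision to save GPU memory.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate one image from a text prompt and save it to disk.
prompt = "a futuristic metropolis with lofty skyscrapers enclosed in a transparent glass dome"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("metropolis.png")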
|
Stable Diffusion recently decided to go public. Since then, there have been several significant developments around it which makes one wonder—will Stable Diffusion change the entire text-to-image generation industry? And, what is the scope behind that?
|
["AI Features"]
|
["AI Tool", "Emad Mostaque", "OpenAI", "StabilityAI", "Stable Diffusion"]
|
Lokesh Choudhary
|
2022-09-01T16:00:00
|
2022
| 837
|
["Go", "StabilityAI", "OpenAI", "Emad Mostaque", "Stable Diffusion", "AI", "innovation", "Git", "Aim", "stable diffusion", "edge AI", "Gradio", "AI Tool", "GitHub", "R"]
|
["AI", "Aim", "Gradio", "edge AI", "R", "Go", "Git", "GitHub", "stable diffusion", "innovation"]
|
https://analyticsindiamag.com/ai-features/stable-diffusion-a-milestone/
| 2
| 10
| 1
| true
| true
| false
|
10,039,794
|
Now Red Hat Is Giving Away This For Free
|
Have you ever used a machine learning algorithm and got confused by its predictions? How did it make this decision? How do we ensure trust in these systems? To answer these questions, a team of researchers at Red Hat recently introduced a new library known as TrustyAI. TrustyAI looks into explainable artificial intelligence (XAI) solutions to address trustworthiness in the machine learning and decision services landscapes. This library helps in increasing trust in decision-making processes that depend on AI predictive models. Why this research Automation of decisions is crucial to deal with complex business processes that can respond to changes in business conditions and scenarios. The researchers stated, “The orchestration and automation of decision services is one of the key aspects in handling such business processes. Decision services can leverage different kinds of predictive models underneath, from rule-based systems to decision trees or machine learning-based approaches.” “One important aspect is the trustworthiness of such decision services, especially when automated decisions might impact human lives. For this reason, it is important to be able to explain decision services,” the researchers added. This is the reason why the researchers created this XAI library. The library leverages different explainability techniques for explaining decision services and black-box AI systems. Tech behind TrustyAI The TrustyAI Explainability Toolkit is an open-source XAI library that offers value-added services to a business automation solution. It combines machine learning models and decision logic to enrich automated decisions by including predictive analytics. In particular, the TrustyAI Explainability Toolkit leverages three explainability techniques for black-box AI systems: LIME – Local Interpretable Model-agnostic Explanations (LIME) is one of the most widely used approaches for explaining a prediction generated by a black-box model (a minimal sketch of plain LIME usage appears at the end of this article); SHAP – SHapley Additive exPlanations is a popular open-source library that works well with any machine learning or deep learning model; and counterfactual explanations – an essential approach for providing transparency and explainability for the results of predictive models. The researchers investigated the techniques mentioned above, benchmarking both the LIME and counterfactual methods against existing implementations. For benchmarking, they introduced three explainability algorithms: TrustyAI-LIME, TrustyAI-SHAP and the TrustyAI counterfactual search. Contributions of this research The important contributions made by the researchers are: the TrustyAI Explainability Toolkit is the first comprehensive set of tools for explainable AI that works well in the decision service domain; the research presents an extended approach for generating local interpretable model-agnostic explanations, built especially for decision services; it demonstrates a counterfactual explanation generation approach based on a constraint problem solver; it provides an extended version of SHAP that enables background data identification and includes error bounds while generating confidence scores; and, in terms of sparsity, TrustyAI manages to fully satisfy the requirement of changing as few features as possible. Wrapping up According to the researchers, local explanations generated with TrustyAI-LIME are more effective than the LIME reference implementation.
It does not require training data to accurately sample and encode perturbed samples, making it fit better in the decision service scenario. The planned extensions to SHAP within TrustyAI-SHAP have the potential to greatly improve diagnostic ability when designing explainers. TrustyAI-SHAP aims to address feature attributions by providing accuracy metrics and confidence intervals. Lastly, the TrustyAI counterfactual search achieved good performance relative to the Alibi baseline. TrustyAI requires significantly less time to retrieve a valid counterfactual.
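TrustyAI itself is implemented in the Java ecosystem, so as an illustration of the underlying LIME technique the article refers to (rather than TrustyAI-LIME specifically), here is a minimal sketch using the Python lime package and a scikit-learn classifier. The dataset, model, and settings are arbitrary choices made only for the example.

from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple black-box model to explain.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# LIME explains one prediction at a time by fitting an interpretable surrogate
# model around locally perturbed copies of the instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)

for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")  # sign shows which way each feature pushed this decision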
|
Have you ever used a machine learning algorithm and got confused by its predictions? How did it make this decision? How do we ensure trust in these systems? To answer these questions, recently, a team of researchers at Red Hat introduced a new library known as TrustyAI. TrustyAI looks into explainable artificial intelligence (XAI) solutions […]
|
["AI Trends"]
|
["library", "red hat"]
|
Ambika Choudhury
|
2021-05-08T13:00:00
|
2021
| 540
|
["artificial intelligence", "machine learning", "red hat", "AI", "library", "R", "predictive analytics", "RAG", "Aim", "deep learning", "analytics", "xAI"]
|
["AI", "artificial intelligence", "machine learning", "deep learning", "analytics", "xAI", "Aim", "RAG", "predictive analytics", "R"]
|
https://analyticsindiamag.com/ai-trends/now-red-hat-is-giving-away-this-for-free/
| 3
| 10
| 1
| false
| false
| false
|
10,168,382
|
AI in Music Needs to Be Controlled, Says AR Rahman
|
In the past years, the use of AI tools in the music industry has received mixed reactions. In India, the debates sparked after renowned Indian composer A.R. Rahman used AI to recreate the voices of two late singers, Bamba Bakya and Shahul Hameed, for Rajinikanth’s 2024 film Lal Salaam. In a recent interview with PTI, Rahman addressed the controversy and shared his thoughts on the larger implications of AI in music. “Some of the songs are so filthy, it needs to be controlled because if it’s not, there’ll be chaos,” Rahman said, talking about some of the music created by AI trying to mimic other artists. While Rahman acknowledged the power of AI to empower creators who previously lacked resources, he also cautioned against its unchecked misuse. “There are good and bad aspects,” he noted. ‘The good things should be used to empower people who never had the chance to put their vision into action. However, overusing it is detrimental to us. It’s like mixing poison with oxygen and breathing it in.’ Rahman clarified that when he used AI in Lal Salaam, it was done ethically. Permission was obtained from the families of the late singers, and they were also compensated. “It’s a tool to speed up things, not to fire people,” Rahman said in a previous statement defending his decision. Rahman emphasised the urgent need for guidelines and digital ethics. “There should be rules, like certain things you can’t do. Like, how they talk about ethics or behaviour in a society—this is also behaviour in the software and digital world.” On the other hand, amidst copyright claims and the need for fairly compensating artists, it becomes an uphill task for AI startups, such as Suno.ai or Udio AI, to gain revenue and popularity. For example, while speaking earlier with AIM, Mikey Shulman, CEO of Suno stated that the AI platform by the company is not just making music, but musicians as well. However, there are two sides of this debate. Beatoven.ai, an Indian AI music startup, has gotten the hang of it in the most ethical and responsible way possible. CEO Mansoor Rahimat Khan and his team started contacting small and slowly bigger artists for creating partnerships and sourcing their own data. The company had a headstart as no one was talking about this field back then. Within a year, it amassed more than 100,000 data samples, which were all proprietary for them. When it comes to Beatoven.ai, Khan told AIM that he aims to head in a more B2B direction as building a direct consumer app does not make sense. “I don’t believe everybody wants to create music,” added Khan, saying that not everyone is learning music in the world. That is why, the company is currently focused only on background music without vocals.
|
Rahman earlier had recreated two singers’ voices using AI
|
["AI News"]
|
["AI (Artificial Intelligence)", "AR Rahman", "music"]
|
Merin Susan John
|
2025-04-22T13:11:54
|
2025
| 466
|
["AR Rahman", "Go", "AI music", "programming_languages:R", "AI", "data_tools:Spark", "programming_languages:Go", "Git", "music", "Aim", "R", "AI (Artificial Intelligence)", "startup"]
|
["AI", "Aim", "R", "Go", "Git", "startup", "AI music", "programming_languages:R", "programming_languages:Go", "data_tools:Spark"]
|
https://analyticsindiamag.com/ai-news-updates/ai-in-music-needs-to-be-controlled-says-ar-rahman/
| 2
| 10
| 2
| false
| true
| false
|
46,961
|
Top 10 Data Scientists In India — 2019
|
Each year, Analytics India Magazine publishes the annual list of the most prominent data scientists in India. This list recognises the data science professionals who have had an exceptional journey in the domain and have contributed unique innovations and unparalleled accomplishments. 2019 is the fifth year of the industry-acclaimed list and we have 10 names with diverse backgrounds who have made significant contributions in the field. From building data science teams to developing data science solutions, these data scientists have numerous credits to their name. For the list, we have considered data scientists working with organisations or independently, irrespective of the size and nature of work. Also, we do not repeat names from previous years, so, do check earlier years’ inclusions. Read our previous list: 2018 | 2017 | 2016 (The names are listed in alphabetical order) Aravind Chandramouli Analytics & Data Science Journey: Chandramouli started his career with an internship at Google in 2007 and has worked with Microsoft, GE Research Labs, Fidelity Investments over his 12-year career. At Fidelity Investments, Aravind led a team of over 10 data scientists solving tough business problems in the areas of Machine Learning and Text Mining. Current Role & Responsibilities: Chandramouli is the Principal, Data Science at Tredence. He currently heads the data science practice at the company with a focus on innovation. Significant Achievements: He is credited with creating Agent Virtual Assistant, that could reduce the customer hold time by enabling the customer service agent to answer customer questions faster. The system used a combination of Information retrieval techniques, modified language models, deep learning methods to mine question and answers from customer chat. This system was recognised with the President’s Award for Innovation in 2018 at Fidelity Investments India (awarded to just one team). The team also filed for a patent that was granted. He has six patent applications with two grants under his name and over 10 publications in top conferences and journals. Education: Chandramouli has a PhD in Computer Science from the University of Kansas with a focus on Information Retrieval and Machine Learning. Arpan Gupta Analytics & Data Science Journey: With a rich experience of over 16 years in consulting across CPG and retail sectors, Gupta has been instrumental in using analytics for planning and fine-tuning their brand strategy. He has worked extensively in developing analytics solutions that have been used by global brands to deepen their understanding of consumers in different markets. He was also a part of the initial seed team at Marketics Technologies, a successful industry startup in analytics which was acquired by WNS in 2008. Current Role & Responsibilities: Currently he is part of the leadership team at TEG Analytics, pivoting a crucial strategy around building analytics products. Significant Achievements: With an ability to convince multiple stakeholders through data insights, Gupta has witnessed the transition of the data science practise from the early years of using SAS based tools to the ML/AI based algorithmic paradigms today. While there are several experts in the data sciences field who can champion the use of advanced methods to solve business problems, Gupta is one of the few in the industry who believes in focusing on this human connection. Education: MBA from IIM Ahmedabad with a background in Physics from IIT Kharagpur. 
Muthumari S Current Role & Responsibilities: Muthumari S is the Sub-Business Unit Head, Brillio Analytics, specialized in delivering AI/ML use-cases at scale across customer/product/marketing analytics, NLP and vision analytics. In her current role, she works with CXOs, in enabling their organizations to make accurate data-driven decisions as well as solve ambiguous problems involving unstructured text/image and machine-generated data with tangible business impact. Analytics & Data Science Journey: Her natural affinity for numbers and being able to draw sense out of them enabled her to start her career with analytics/data science during the recession. She was fortunate to work on her first project on developing a macro-econometrics model for a US Software provider on quantifying the impact of recession on their product sales. Over a span of 12 years, she has worked on multiple data science projects including hyper-personalization for eCommerce customers across industries, image-based recommendations and consumer prediction using NLP. Significant Achievements: She has evangelized AI/ML with at least 10 Fortune 500 accounts which she managed primarily in the technology, telecom and media industries. While AI at scale was still being talked about in 2016, she was a key contributor in the patent on Video Analytics – that was targeted at making sense of the humongous amount of unstructured data created by different forms of media devices and digital content. Education: She holds a Bachelor’s degree in Engineering from College of Engineering Guindy, Anna University. Manu Chandra Analytics & Data Science Journey: Chandra has over 20 years of experience in data science consulting and training across various domains such as fraud management, credit risk and customer analytics. He has deep expertise in applying predictive analytics using deep learning and machine learning to solve complex problems to create scalable solutions. Prior to founding MathLogic, he has worked with American Express and Accenture Consulting across multiple geographies. Current Role & Responsibilities: He is the co-founder and Chief Data Scientist at FN MathLogic Consulting Services Pvt Ltd. Significant Achievements: He has worked on key projects across industries such as developing a forecasting model to predict cash disbursement from an ATM, creating a pricing framework for a wealth management firm, developing credit risk models and migration of codes to SPARK. Chandra also has a deep passion for teaching machine learning and deep learning and has trained over 2000 professionals. Education: Manu holds a Master’s degree in Business Administration from MDI Gurgaon and a Bachelor’s degree in Engineering from IIT Delhi. Naveen Xavier Data Science Journey: Xavier has an interesting journey being a veteran from the Indian army and having served the country in multiple operations in the J&K sector. A technocrat at heart, he has worked on diverse assignments across Intelligence, IT & Telecom domain. He was handpicked for NATGRID, a coveted assignment at the Home Ministry for the establishment of a predictive analytics framework for countering terrorism. There he played a key role in applying machine learning to uncover hidden patterns using national databases. Current Role & Responsibilities: Xavier currently heads Data & AI Products vertical at the Aditya Birla Group. He is a ‘change agent’ who is propelling the transformation of a legacy group to embrace AI solutions built on a hybrid cloud model. 
He is driving a unique blend of open innovation, rapid prototyping, industry-grade productisation, and enterprise-level scale-up. Xavier along with his team of data scientists are working on solving complex problems by optimising deep learning techniques. He is driving cross-industry innovation by adopting analogies through contextualization in the field of computer vision, language understanding and industrial sensor technology. Significant Achievements: Post-retirement, Xavier turned an entrepreneur and co-founded DataVal Analytics with Dr Sam Pitroda, which provided boutique analytics services to companies across the globe. The start-up has the unique distinction of cracking the Facebook bAbI Challenge using Natural Language Understanding. Education: Xavier is an alumnus of the National Defence Academy. He holds an Engineering Degree in IT & Telecom from Military College of Telecommunication Engineering as well as a Master’s Degree in Software Systems from BITS, Pilani. Omprakash Ranakoti Analytics & Data Science Journey: A bioinformatician turned data scientist, Rankoti has over 14 years of experience in statistical modelling, machine learning and deep learning. He started his data journey while working on a predictive diagnostic tool for prostate cancer to automate data collection and analyse large amount of data, and has since delivered over 50 advanced analytics solutions in his career. He has since worked in various industries such as CPG, media & entertainment, healthcare and life sciences. Rankoti has also been instrumental in building an R package as Integration of R and HPCC system to use the facility for Big Data analysis. Current Role & Responsibilities: He is currently the principal data scientist at Genpact and leads a team of experts in ML/DL at Genpact Analytics. At Genpact, he champions projects on data science and works with clients with a focus on providing business value through innovation. Currently, his research is focused on developing machine learning models for workforce scheduling and optimization for a large call centre. Significant Achievements: He has been recognised with various awards at Genpact, Symphony Services for his contribution to the development of efficient operations through data science. Education: Rankoti is a Post Graduate in Bioinformatics from the Institute of Bioinformatics and Applied Biotechnology. Satyamoy Chatterjee Analytics & Data Science Journey: Chatterjee has led and demonstrated measurable business impact across different verticals such as Banking and Financial Services, Healthcare, Retail etc. In his past roles with Citigroup, he created business impacts by application of predictive and prescriptive analytics for customer lifecycle management across asset and liability products for the bank. Current Role & Responsibilities: Chatterjee currently is the executive vice president and heads the client solutions and product strategy at Analyttica Datalab. He has been playing pivotal roles in designing prototypes for analytics and AI product development with the goal of inventing and creating IP portfolio for his company. Significant Achievements: He has been a think tank for Analyttica in conceptualizing and steering the product ATH Precision. Satyamoy has a design patent in innovation in the field of experiential analytics and knowledge immersion. He also has a patent extension on designing adaptive machine learning systems. 
He also set up the mortgage analytics practice for Global Decision Management in Citigroup for it’s North America Mortgage business and created rapid impacts by application of advanced analytics and data science methodologies to significantly reduce customer attrition. Education: He has an MS in Industrial Engineering from Wichita State University (2003) and did his Executive General Management from IIM Bangalore in 2012. Srivatsa Kanchibotla Current Roles and Responsibilities: Kanchibolta is the Principal Data Scientist and Senior Partner at TheMathCompany. He currently works on building the next generation of AutoML tools that can develop and deploy ML models at scale. With a view to addressing this distressing gap, his current focus is on building frameworks and tools that allow a Data Scientist to easily build, test, experiment and deploy ML models. Analytics and Data Science Journey: With more than 13 years of experience across retail as shop floor engineer and academia as a physics teacher, he stepped into a data science career. In his career, he has worked with 5+ Fortune 500 companies creating millions of dollars’ worth of impact for clients, alongside building and managing large teams of Data Engineers and Data Scientists alike. Prior to TheMathCompany, he worked with Mu Sigma as a Senior Manager. Significant Achievements: He has developed state-of-the-art Repairs Forecasting models across 70 product lines for the World’s largest technology company, built analytics CoEs for multiple Fortune 500 Insurance giants. He also developed various other analytics solutions such as Fraud Abuse and Waste (FWA) detection framework and Pattern Recognition algorithms that detect in-game events during a live sports event, based on activity data. Education: Srivatsa holds a Bachelor’s degree from IIT Madras. Sriram Krishnamurthy Analytics & Data Science Journey: With more than 17 years of experience in data science and consulting, he has led data science initiatives across North America, Europe and APAC. He has held key roles in the data science and analytics function at Western Union corporation and was also a partner at a marketing consulting firm where he provided data-driven strategic counsel to clients across the entire range of marketing topics. Current Role & Responsibilities: He currently heads the data science and AI group at Tiger Analytics, with a focus on delivering client impact through developing solutions by harnessing algorithms, platforms and data at scale. He also manages data science capability development, R&D and product development. Significant Achievements: Krishnamurthy has built and led data science teams to provide solutions to problems across a wide range of topics such as pricing, portfolio optimization, forecasting, fraud and compliance, sensor-based preventive maintenance, marketing spend optimization, demand modelling. He has also co-authored articles in leading journals on topics related to demand modelling, econometrics, consumer welfare and planning. Education: He has a masters from The University of Texas at Austin and an undergraduate degree from IIT, Delhi. Sunil Vuppala Analytics & Data Science Journey: Dr Sunil has 15 years of industrial and research experience in machine learning, deep learning, analytics, automation, IoT, healthcare and smart grid. He has worked with companies like Philips research where he contributed to explainable deep learning, inference engine for Health Suite platform. 
He was also one of the key architects of the Infosys Mana (Nia) platform, a knowledge-based AI platform. He has also helped an agri-business client improve efficiencies by 40%. Dr Vuppala also contributes to the academic community as visiting/adjunct faculty at top institutes in India. Current Role & Responsibilities: Dr Sunil is working as Director of Data Science at Ericsson GAIA. He currently leads a team of over 20 data scientists and data engineers to solve problems in the telecom domain. These problems range from visual intelligence on drone images to predicting issues and their root causes from terabytes of data for large telecom operators around the world. Significant Achievements: Dr Vuppala has 20 patents to his credit, with 6 granted and 14 pending in the US, Europe and India. With over 30 papers in journals and international conferences, he is also an elected fellow of IETE for his contributions to analytics, IoT and AI. He was a recipient of the Infosys award of excellence in the innovation category in 2016 and has delivered more than 100 talks to spread the knowledge of AI and analytics in various forums. Education: Vuppala has an MTech from IIT Roorkee (with his thesis at Macquarie University, Sydney, Australia), a PhD from IIIT Bangalore and an SMP from IIM Ahmedabad.
|
Each year, Analytics India Magazine publishes the annual list of the most prominent data scientists in India. This list recognises the data science professionals who have had an exceptional journey in the domain and have contributed unique innovations and unparalleled accomplishments. 2019 is the fifth year of the industry-acclaimed list and we have 10 names […]
|
["AI Features"]
|
["Data scientists India", "HPC Data Management Software", "hpc data management system", "managing hpc data", "retail bi prescriptive"]
|
Srishti Deoras
|
2019-10-08T10:42:54
|
2019
| 2,301
|
["data science", "hpc data management system", "machine learning", "AI", "retail bi prescriptive", "Data scientists India", "ML", "HPC Data Management Software", "computer vision", "NLP", "RAG", "Aim", "deep learning", "analytics", "managing hpc data"]
|
["AI", "machine learning", "ML", "deep learning", "NLP", "computer vision", "data science", "analytics", "Aim", "RAG"]
|
https://analyticsindiamag.com/ai-features/top-10-data-scientists-in-india-2019/
| 4
| 10
| 5
| true
| false
| true
|
63,762
|
Right Attitude Is Vital To Succeed As A Data Scientist, Says This Head Of Data Science
|
Analytics India got in touch with Ravi Kaushik, VP and Head of Data Science at Near, for our weekly column My Journey In Data Science. He has over seven years of professional work experience in the data science domain. Ravi shared how he fell in love with data science and what practices he adopted to succeed in the competitive field. The Onset Ravi completed his instrumentation engineering in 2003 from MS Ramaiah Institute of Technology and then started his master’s in Electrical Engineering in 2004 at The City College of New York. “Passionate about automation, I was working on various robotics projects. And eventually got the research funding to continue the research at the City University of New York Graduate Center as a PhD,” says Ravi. “However, my PhD was in computer science. While dealing with robotics, one needs a wide range of expertise — computer science, electrical, and mechanical engineering. My PhD in computer science further enabled me to build and research in the robotics landscape,” he added. Ravi’s journey in analytics started when he was exposed to data structures, theoretical computer science, and machine learning in the first semester of his PhD. He then took advanced machine learning subjects in the following year. He used Matlab and C++ for his research, Java for visualisation, and C for embedded systems. Python was not popular back then, and he only started with it in the latter part of his PhD. During his PhD, Ravi learned about neural networks and advanced machine learning; he was also influenced by Yann LeCun, who once demonstrated an object detection model at the university. Ravi did not know that data science would proliferate, but he was interested in it due to its use cases in robotics and automation. “I headed in the direction of automation and ended up being a data scientist,” said Ravi. First Data Science Job Even after having a PhD, Ravi struggled to land his first data science job since it was just after the 2008 recession. He failed in more than 50 interviews but never gave up. Eventually, Ravi got a job at American Express just before his student visa was about to end. There he worked on numerous projects, but while talking about one of his best projects, Ravi says that his contribution to credit fraud detection was something he cherished, as the project was ahead of its time. Besides, Ravi leveraged big data tools and analysed a colossal amount of information at American Express. While working there, he kept enhancing his skills by reading books and research papers and learning from colleagues. Even today, Ravi thinks aspirants should keep popular books — The Elements of Statistical Learning, and Deep Learning by Ian Goodfellow — handy for mastering techniques. Blogs help in obtaining an overall idea, but books can give you an in-depth understanding of techniques. Current Job Experience After more than five and a half years at American Express, Ravi joined one of his colleagues from American Express at a startup, Corridor Platforms, as he wanted to come back to India. However, Ravi later joined Near after almost one and a half years at the startup. Today, as VP and Head of Data Science, Ravi manages teams of data scientists. He built the data science team at Near from the ground up by hiring proficient data scientists who were also a cultural fit for achieving shared goals. “No matter how brilliant you are or how much you have read, it becomes useless if one does not have the right attitude. 
It is completely fine to make mistakes, but one has to be humble enough to keep trying and succeeding,” says Ravi. For one, the firm hired a brilliant aspirant from Stanford, but he failed to deliver due to a lack of the right attitude. Apart from attitude, Ravi looks at applicants’ academic achievements and tries to assess them on their understanding of the basics of data science. Advice For Aspirants For aspirants, Ravi said that getting the foundation right before building a layer of machine learning techniques is vital to succeeding in data science. Aspirants should first cover the field in one full stretch and then revisit it to go in-depth into any one of the specialisations within the data science domain. Along with deep diving into a speciality, having a complete idea of the entire data science field is also essential. Besides, having a good portfolio of projects is equally necessary for showcasing your expertise in the data science landscape.
|
Analytics India got in touch with Ravi Kaushik, VP and Head of Data Science at Near, for our weekly column My Journey In Data Science. He has over seven-plus years of professional work experience in the data science domain. Ravi shared how he fell in love with data science and what practices he adopted to […]
|
["AI Features"]
|
["Interviews and Discussions", "what is data science"]
|
Rohit Yadav
|
2020-04-27T19:00:00
|
2020
| 745
|
["data science", "machine learning", "AI", "neural network", "RAG", "Python", "deep learning", "object detection", "analytics", "what is data science", "fraud detection", "Interviews and Discussions"]
|
["AI", "machine learning", "deep learning", "neural network", "data science", "analytics", "RAG", "fraud detection", "object detection", "Python"]
|
https://analyticsindiamag.com/ai-features/right-attitude-is-vital-to-succeed-as-a-data-scientist-says-this-head-of-data-science/
| 3
| 10
| 1
| false
| false
| true
|
35,196
|
Top TED Talks on AI And Machine Learning: 2019 Edition
|
Image source: Ted.com Since its conceptualisation in 1983, TED Talks have been the go-to platform for people from all walks of life to share ideas and thoughts. Over the last three decades, the platform has witnessed some of the finest speakers capture the imagination of their audience with absolute exuberance. In this article, Analytics India Magazine takes a look at some of the most interesting talks that revolve around emerging tech like artificial intelligence and machine learning. How do we learn to work with intelligent machines? Published in November 2018, this talk by Matt Beane, assistant professor of Technology Management at the University of California, addresses the most common fear associated with AI — machines taking over human jobs. Beane, however, challenges this notion and says that instead of handling the technology carelessly and letting it become a hindrance to getting newer jobs, the potential of the technology can be used in such a way that machine-enhanced mentorship can “take full advantage of AI’s amazing capabilities while enhancing our skills at the same time.” How AI can save our humanity One of the world’s noted AI experts, Kai-Fu Lee, is a venture capitalist, technology executive and writer. In this talk, Lee discusses the ethical aspects associated with AI and the AI revolution that is currently unfolding in the US and China. Throughout his speech, Lee highlights the importance of thriving in such a world by harnessing compassion and creativity. “AI is serendipity. It is here to liberate us from routine jobs, and it is here to remind us what it is that makes us human,” Lee points out. How to get empowered, not overpowered, by AI As a noted physicist and cosmologist, Max Tegmark’s work on analysing the potential harm of AI and ML has been very popular. As the title of the speech suggests, Tegmark attempts to separate fact from fiction in the myths and fears associated with AI. Through his speech, Tegmark describes the concrete steps “we should take today to ensure that AI ends up being the best, rather than worst thing to ever happen to humanity.” What is the meaning of work? Venture capitalist Roy Bahat and TV producer and investigative journalist Bryn Freedman discuss the much-debated topic of securing jobs in a more AI-relevant future. “When we spoke to people, there were two themes that came out loud and clear. People are less looking for more money or to get out of the fear of robots taking their jobs. Rather, they want something that is very stable and predictable. And the second thing that they said they want was dignity; that concept of self-worth through work emerged again and again in our work,” Bahat points out. Why fascism is so tempting — and how your data could power it As countries across the world turn to divisive politics, acclaimed writer Yuval Noah Harari, author of Sapiens: A Brief History of Humankind (Harvill Secker, 2014) and Homo Deus (Harper, 2017), manages to capture the attention of his audience with his almost science-fiction-like speech. With fascism challenging governance, the author talks about how the consolidation of data would affect democracy. “The enemies of liberal democracy hack our feelings of fear and hate and vanity, and then use these feelings to polarize and destroy. It is the responsibility of all of us to get to know our weaknesses and make sure they don’t become weapons,” he says in his speech, while appearing as a live hologram from Tel Aviv. 
How to be “Team Human” in the digital future Douglas Rushkoff, an American media theorist, writer, columnist and lecturer, talks about the potential harms that humans can inflict upon themselves as we move towards a world dominated by technology. “Join Team Human. Find others. Together let’s make the future that we always wanted,” he says. Fake videos of real people — and how to spot them Misinformation and the propagation of fake news and videos continue to challenge governments across the world. So if a video of Barack Obama and Donald Trump surfaces on the internet saying some of the oddest things, don’t be surprised, because algorithms can now mimic them as well. Computer scientist Supasorn Suwajanakorn, in this speech, demonstrates how he used AI and 3D modelling to create photorealistic fake videos of people synced to audio. How our brains will keep up with AI Bruno Michael, a Distinguished Researcher and a member of the US National Academy of Engineering, talks about how, with physical activity and positive reinforcement, human beings can stay abreast of AI systems. In his speech, Michael shares an optimistic view and tells his viewers not to be afraid of the potential of emerging technologies. Can we protect AI from our biases? One of the biggest challenges ascribed to AI systems is the bias in algorithms, which is known to have produced sexist and racist outputs. In her speech, documentary filmmaker Robin Hauser analyses the potential of AI systems to produce biased outcomes, as human beings themselves are prone to biased assumptions, consciously or unconsciously. “We need to figure this out now. Because once skewed data gets into deep learning machines, it’s very difficult to take it out,” Hauser says.
|
Since its conceptualisation in 1983, TED Talks have been the go-to platform for people from all walks of life to share ideas and thoughts. Over the last three decades, the platform has witnessed some of the finest speakers capture the imagination of their audience with absolute exuberance. In this article, Analytics India Magazine takes a […]
|
["AI Trends"]
|
["AI (Artificial Intelligence)", "Machine Learning", "TED Talks"]
|
Akshaya Asokan
|
2019-02-20T12:33:22
|
2019
| 871
|
["Go", "artificial intelligence", "machine learning", "TPU", "AI", "ML", "Machine Learning", "TED Talks", "Aim", "deep learning", "analytics", "R", "AI (Artificial Intelligence)"]
|
["AI", "artificial intelligence", "machine learning", "ML", "deep learning", "analytics", "Aim", "TPU", "R", "Go"]
|
https://analyticsindiamag.com/ai-trends/top-ted-talks-on-ai-and-machine-learning-2019-edition/
| 3
| 10
| 0
| false
| true
| true
|
10,003,190
|
Power Plant Energy Output Prediction: Weekend Hackathon #13
|
Weekend hackathons are fun, aren’t they! In our last weekend hackathon, we introduced a new and unique problem statement using a UCI open dataset. But we were big-time disappointed as some of the participants ended up probing the leaderboard. However, we decided to host an open UCI dataset competition again this weekend. So in this weekend hackathon, we have trained a machine learning model to perturb the target column instead of manually adding the noise. Yes, you heard it right. In this hackathon, we are challenging all the MachineHackers to capture our leaderboard and prove their mettle by competing against MachineHack’s AI. The challenge will start on July 24th, Friday, at 6 pm IST. Click here to Participate. Problem Statement & Description: The dataset was collected from a Combined Cycle Power Plant over 6 years (2006-2011), when the power plant was set to work with a full load. Features consist of the hourly average ambient variables Temperature (T), Ambient Pressure (AP), Relative Humidity (RH), and Exhaust Vacuum (V), which are used to predict the net hourly electrical energy output (PE) of the plant. A combined-cycle power plant (CCPP) is composed of gas turbines (GT), steam turbines (ST), and heat recovery steam generators. In a CCPP, the electricity is generated by gas and steam turbines, which are combined in one cycle, and is transferred from one turbine to another. While the Vacuum is collected from and has an effect on the Steam Turbine, the other three ambient variables affect the GT performance. These ambient factors are the distinguishing features that can predict the electrical energy output. Your objective as a data scientist is to build a machine learning model that can accurately predict the electrical energy output from these attributes. Data Description: The unzipped folder will have the following files: Train.csv – 9568 rows x 5 columns; Test.csv – 38272 rows x 4 columns; Sample Submission – sample format for the submission. Target Variable: PE (electrical energy output). The datasets will be made available for download on July 24th, Friday, at 6 pm IST. Below are the file formats for the provided data: Train.csv, Test.csv, Sample_Submission.xlsx. Click here to Participate. Bounties: The top 3 competitors will receive a free pass to the Computer Vision DevCon 2020. Know more about the Computer Vision DevCon 2020. This hackathon and the bounty will expire on July 27th, Monday, at 7 am IST. Rules: One account per participant; submissions from multiple accounts will lead to disqualification. The submission limit for the hackathon is 10 per day, after which submissions will not be evaluated. All registered participants are eligible to compete in the hackathon. This competition counts towards your overall ranking points. You will not be able to submit once you click the “Complete Hackathon” button; you may ignore this feature. We ask that you respect the spirit of the competition and do not cheat. This hackathon will expire on 27th July, Monday, at 7 am IST. Use of any external dataset is prohibited and doing so will lead to disqualification. Evaluation: The leaderboard is evaluated using Root Mean Squared Error on the participant’s submission. Click here to Participate.
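As a starting point, here is a minimal baseline sketch for the regression task described above, assuming pandas and scikit-learn are available. The column names T, AP, RH, V and PE are assumptions inferred from the data description; the headers in the downloaded Train.csv/Test.csv may differ, in which case they should be adjusted. The leaderboard metric is RMSE, so the same measure is used for validation.

```python
# Minimal baseline sketch for the CCPP energy-output regression task.
# Assumes Train.csv / Test.csv are in the working directory and that the
# columns are named T, AP, RH, V (features) and PE (target); adjust the
# names if the downloaded files use different headers.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

train = pd.read_csv("Train.csv")
test = pd.read_csv("Test.csv")

features = ["T", "AP", "RH", "V"]  # hourly average ambient variables
target = "PE"                      # net hourly electrical energy output

X_train, X_val, y_train, y_val = train_test_split(
    train[features], train[target], test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_train, y_train)

# The leaderboard is scored on Root Mean Squared Error.
rmse = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
print(f"Validation RMSE: {rmse:.3f}")

# Predict on the test set and write a submission in the sample format
# (writing .xlsx requires the openpyxl package to be installed).
submission = pd.DataFrame({"PE": model.predict(test[features])})
submission.to_excel("Sample_Submission.xlsx", index=False)
```

Gradient-boosted trees or a tuned neural network would likely score better, but a simple held-out RMSE check like this is a reasonable first benchmark before submitting to the leaderboard.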
|
Weekend hackathons are fun, aren’t they! In our last weekend hackathon, we introduced a new and unique problem statement using UCI open dataset. But, we were big-time disappointed as some of the participants ended up probing the leaderboard. However, we decided to host an open UCI dataset competition again this weekend. So In this weekend […]
|
["Deep Tech"]
|
["datascience", "datascience hackathons", "Hackathon", "Machine learning hackathon", "machine learning hackathons", "Machinehack", "Weekend Hackathon"]
|
Anurag Upadhyaya
|
2020-07-24T17:25:00
|
2020
| 509
|
["machine learning", "Weekend Hackathon", "TPU", "Machinehack", "Machine learning hackathon", "datascience", "AI", "programming_languages:R", "computer vision", "RAG", "Hackathon", "ai_applications:computer vision", "R", "datascience hackathons", "machine learning hackathons"]
|
["AI", "machine learning", "computer vision", "RAG", "TPU", "R", "programming_languages:R", "ai_applications:computer vision"]
|
https://analyticsindiamag.com/deep-tech/power-plant-energy-output-prediction-weekend-hackathon-13/
| 3
| 8
| 0
| false
| false
| false
|
10,072,649
|
Google Opposes Facebook-Backed Proposal for Self-Regulatory Body in India
|
Google has expressed concerns about a self-regulatory body for social media in India. Facebook and Twitter have shown support for the proposal to develop the self-regulatory body, whose primary function would be hearing user complaints. In June 2022, the Indian government proposed the appointment of a government panel to hear complaints from users about content moderation decisions. It also announced that companies can appoint their own self-regulatory body if the industry accepts. The proposal for the government panel was closed for public comments in early July 2022, though the implementation has no fixed date. An initial draft of the proposal recommended the appointment of a retired judge or an experienced technology expert along with six other senior executives from social media companies. Facebook and Twitter have shown interest in forming a self-regulatory body, but it seems highly unlikely considering the lack of consensus among the tech giants. Consequently, a government panel to oversee the whole industry will be formed. Alphabet Inc’s Google has reservations about allowing a self-regulatory body as it would allow external reviews of decisions and force changes in content, violating Google’s internal policies. Sources from Google also expressed concerns that directives from a self-regulatory body might set a dangerous precedent. Snap Inc and ShareChat also attended the meeting and voiced concerns about the proposal, stating that it requires more consultation with users and civil society. Facebook, Twitter, and Google have faced backlash for blocking various Indian influencers and have thus been at odds with the Indian government. In contrast, the Indian government has expressed concerns about users being upset over their content being taken down and having only legal recourse to voice their concerns. Google’s YouTube removed 1.2 million videos from its platform in the first quarter of the year, citing violations of its guidelines. US industry groups of the tech giants also pointed out that the government-run panel raises concerns about how independent the decision-making would be.
|
The Indian government proposed the appointment of a government panel to hear complaints about content moderation decisions from users.
|
["AI News"]
|
["ban", "Facebook", "Google", "Indian government", "Twitter (X)"]
|
Mohit Pandey
|
2022-08-11T16:44:04
|
2022
| 322
|
["Go", "programming_languages:R", "AI", "programming_languages:Go", "Indian government", "Google", "Facebook", "ban", "Twitter (X)", "R"]
|
["AI", "R", "Go", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/google-opposes-facebook-backed-proposal-for-self-regulatory-body-in-india/
| 2
| 5
| 1
| true
| true
| false
|
10,052,004
|
Bangalore, Delhi, Mumbai, Top 3 Cities With AI Jobs
|
Analytics India Magazine (AIM), in association with T. A. Pai Management Institute (TAPMI), has come out with the report titled “State Of Artificial Intelligence In India 2021.” AIMResearch has researched the open jobs across the AI segment and come out with findings on the following: city-wise distribution of open jobs, salary-wise distribution of open jobs, and experience-wise distribution of open jobs. According to the city-wise distribution of open jobs, Bengaluru has the greatest proportion at 29.9% (against 28.6% last year), followed by Delhi (NCR) with 17.9% and Mumbai with 11.8% of advertised jobs. Image: AIMResearch IT organisations, technology firms, and start-ups all have large operations in Bengaluru, and as a result, the city has the largest percentage of advertised jobs. Similarly, Delhi is the hub of ITES firms, while Mumbai is dominated by captive banking, domestic BFSI and other domestic firms. Considering the salary-wise distribution of open jobs, the study reveals that the 6-10 lakh salary bracket accounts for the maximum proportion of open jobs. This is followed by the 10-15 lakh salary level, then the mid-senior salary level, and the mid-junior salary level of 3-6 lakhs. Image: AIMResearch The experience-wise distribution of open jobs reveals that the greatest proportion of jobs has been advertised for the experience level of 5 to 7 years. This is followed by the proportion of mid-senior jobs at the experience level of 7-10 years. Therefore, it can be said that professionals with more than two years of experience hold a greater chance of grabbing job openings in the market. However, the research also shows a silver lining for new entrants in the market, as the entry experience levels of 0-1 years and 1-2 years have a distribution of 7.5% (v/s 5.7% last year) and 9.2% (v/s 7% last year) respectively. Image: AIMResearch The study was carried out on the artificial intelligence market to understand the developments of the AI market in India, the market size, and job opportunities in the domain. Moreover, the study delves into the market size of the different categories of AI and analytics start-ups/boutique firms. One can read the entire report here.
|
The research has a silver lining for the new entrants in the market as the entry experience levels of 0-1 years and 1-2 years have a distribution of 7.5% (v/s 5.7% last year) and 9.2% (v/s 7% last year) open jobs respectively.
|
["AI Trends"]
|
["AIM Research"]
|
kumar Gandharv
|
2021-10-20T18:00:00
|
2021
| 356
|
["Go", "artificial intelligence", "programming_languages:R", "AI", "AIM Research", "programming_languages:Go", "Aim", "analytics", "GAN", "R"]
|
["AI", "artificial intelligence", "analytics", "Aim", "R", "Go", "GAN", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-trends/bangalore-delhi-mumbai-top-3-cities-with-ai-jobs/
| 3
| 9
| 1
| false
| false
| true
|
719
|
Silvan launches wireless home automation products for Indian Homes
|
Silvan, the pioneers of sensible home automation products and solutions in India, launched the new series of retrofit wireless control modules – SIRUS, SIREL, SANSA and LUMOS LITE – at the Electrical Building Technology India (EBTI), 2016. The new products are aimed at making smart yet simple IoT-enabled homes for convenience, security and control of electrical fittings & fixtures. The products are ‘Made in India’ and specifically customised for Indian homes and scenarios. In its latest series of innovations, Silvan has also integrated a voice command interface through Amazon Echo, which means that you can now talk to your home by giving a command using the wake word “Alexa” to do certain things within your house. Silvan believes that this has the potential to extend the reach of home automation to a wider audience, as voice control is more intuitive than app control and Amazon Echo is the next big thing in home automation. Voice control is one of the many things Silvan is working towards. The company is committing to an ‘app-less’ strategy, wherein it will enable most home control functions to be done without the need to use an app. Though an app is a good way to use automation, Silvan believes that real mass adoption will happen only when it reaches a level of simplicity where the system just sits in the background and does its job, without needing much intervention from home owners. Also, with simplicity being Silvan’s focal approach to consumer-centric products, the company believes that integrating voice control into its products will make consumers’ experience with Silvan’s products easier and less complicated. Voice control simplifies the interface of a home automation system, making it more acceptable to its consumers. It is the most futuristic product that you can own in this age and time. Prior to these new consumer products, Silvan had launched its first five consumer products – CBELL, SECURE, CUBO, LUMOS, and zPLY – in June 2016. These previously launched products address a variety of life needs – entrance management, security, comfort & convenience, and entertainment. Speaking about the products Silvan has to offer, Avinash Gautam, CEO, Silvan, said, “Designed and Made for the Indian Home, that’s what Silvan‘s new products are all about. Today, everything is wireless and operated remotely. Hence, the new products will no doubt do exceptionally well in the market because we are offering what the next-gen home requires. Our products are designed to work self-reliantly without depending on each other, but they can, at the same time, be stitched together as an integrated system when more than one of the items are bought. In fact, products like ours, which are energy-efficient security & safety products, are the most in demand.” Silvan Innovation Labs has also partnered with Samarth, a service provider for senior citizens, to provide elderly care solutions. Silvan will be providing high-tech support to ensure the safety and security of senior citizens. About the Products SIRUS is a wireless retrofit Air Conditioner (AC) control module that enables convenient and easy use of your AC from anywhere. Using your home Wi-Fi, the product makes your AC a true IoT device. SIRUS can fit inside the flap of the split AC unit, without affecting the aesthetics of the home. SIREL is a wireless retrofit programmable relay control module that can help regulate curtain motors, gate controls, wired and wireless digital locks, hooters, sprinkler systems, etc. 
SANSA is a stylish wireless smart switch that can act as a programmable scene controller, or can drive loads like lights and fans directly. The product has the option of an infrared remote control and can fit into a 2-modular electric box. SANSA connects to CUBO using your home Wi-Fi network. SIRUS and SIREL are Android and iOS compatible and make for a warm and welcoming home suitable for all age groups, as they enable control through an app as well as the traditional method. LUMOS LITE is a two-channel wireless light control module which can be installed behind the switches in standard electrical modular boxes. It controls two loads independently using your home Wi-Fi network.
|
Silvan, the pioneers of sensible home automation products and solutions in India, launched the new series of retrofit wireless control modules – SIRUS, SIREL, SANSA and LUMOS LITE – at the Electrical Building Technology India (EBTI), 2016. The new products are aimed at making smart yet simple IoT enabled homes for convenience, security and controls […]
|
["AI News"]
|
["Home Automation", "IoT"]
|
Manisha Salecha
|
2016-11-03T05:55:18
|
2016
| 682
|
["Home Automation", "Go", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "Git", "automation", "Aim", "R", "IoT"]
|
["AI", "Aim", "R", "Go", "Git", "automation", "innovation", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/silvan-launches-wireless-home-automation-products-indian-homes/
| 2
| 9
| 1
| true
| false
| false
|
10,139,061
|
The Transformative Impact of Generative AI on IT Services, BPO, Software, and Healthcare
|
Technology Holdings, an award-winning global boutique investment banking firm dedicated to delivering M&A and capital-raising advisory services to technology services, software, consulting, healthcare life sciences, and business process management companies globally, recently launched its report titled “What Does GenAI REALLY Mean for IT Services, BPO, and Software Companies: A US $549 Billion Opportunity or Threat?” “As many as 91% of the respondents believe that GenAI will significantly boost employee productivity, and 82% see enhanced customer experiences through GenAI integration,” said Venkatesh Mahale, Senior Research Manager at Technology Holdings, while speaking at Cypher 2024. He added that in the BPO sector, GenAI is expected to have the biggest impact, particularly in areas such as automation and advanced analytics. Speaking about the impact of generative AI in the IT sector, Sriharsha KV, Associate Director at Technology Holdings, said, “IT services today generate approximately one-and-a-half trillion dollars in revenue, a figure expected to double in the next eight to ten years.” He added that Accenture, the number one IT services company in the world, has started disclosing GenAI revenues, and their pipeline is already at a half-billion run rate for the year. “The pipeline has scaled from a few hundred million last year to, I would say, 300 to 400%. That makes us strongly believe that GenAI is real.” He noted that data centre and chip companies are part of the upstream sectors, as they are responsible for creating the generative AI infrastructure. In contrast, IT services companies are downstream but are gaining momentum in automating building processes using GenAI. Sriharsha stated that generative AI has a notable impact on testing, debugging, DevOps, MLOps, and DataOps. M&A Trends in IT Services and BPO The panel at Cypher further discussed the growing trends in mergers and acquisitions (M&A) driven by GenAI. “2023 was a blockbuster year for funding in GenAI, with $20 to $25 billion infused into the sector,” Sriharsha said. This surge in investment has also translated into increased M&A activity, particularly in the IT services and BPO sectors. “We’ve seen numerous acquisitions focused on integrating GenAI capabilities into industry-specific operations,” he added. Sriharsha explained that in the BPO sector, GenAI is particularly disrupting contact centres. “By automating up to 70% of calls through a combination of chat, email, and voice interactions, companies can operate with fewer agents while maintaining service quality,” he said. This efficiency allows organisations to redirect resources to higher-value tasks, reshaping the way BPOs operate. Enhancing Healthcare with GenAI “India has a population of around 1.4 billion, but there is still a dearth of doctors and nurses,” said Anant Kharad, Vice President at TH Healthcare & Life Sciences. He added that generative AI has several use cases in the healthcare industry that can help solve these problems. “GenAI will analyse my medical records and try to identify the issues I faced in the past and what I’m experiencing now. It will create a summary of all that and then provide it to the nurse for review, who will handle the initial treatment for the outpatient department. The doctor can then take it from there instead of nurses going through tons of paperwork,” he explained. He said that this not only enhances patient care but also optimises healthcare workflows, allowing medical staff to focus on more complex cases. 
Moreover, he added that GenAI is playing a vital role in drug discovery and patient care strategies. “It is working with companies that reverse Type 2 diabetes,” Kharad shared. “It has used machine learning to analyse data from thousands of patients, creating effective treatment curricula that can be rolled out globally,” he said. The Long-Term Implications of Generative AI As companies navigate the potential disruptions brought on by generative AI, the long-term impacts on business models and service offerings cannot be overlooked. According to Kharad, the need for traditional models, like manual contact centres, is already being questioned in the BPO sector. “Testing and debugging in IT services are also being challenged,” he said, suggesting that companies must evolve or risk obsolescence. The healthcare sector, however, appears poised for positive disruption through the application of generative AI. Kharad shared specific examples of how AI can enhance efficiency, especially in diagnostics. “For instance, instead of a radiologist reading 20 reports a day, AI could enable them to process 100 reports,” he explained. This not only increases operational efficiency but also optimises resource allocation in a sector often constrained by staff shortages. Furthermore, Kharad pointed out that major players like Amazon are already using generative AI to automate prescription orders based on data inputs. “If AI can handle 90% of the workload, it will reduce costs and provide faster service for patients,” he said. Kharad further elaborated on the healthcare sector’s response to M&A trends, noting that biotech and health-tech companies are at the forefront. “Pharmaceutical companies in India are partnering with start-ups to drive innovation in drug discovery,” he said. For those interested in exploring the implications of generative AI further, Technology Holdings has launched a comprehensive report on its impact on IT services, BPOs, and software companies. The report can be accessed here.
|
“As many as 91% of the respondents believe that GenAI will significantly boost employee productivity, and 82% see enhanced customer experiences through GenAI integration,” said the Technology Holdings panel while speaking at Cypher 2024, India’s biggest AI conference organised by AIM Media House.
|
["AI Highlights"]
|
["AI Healthcare", "Generative AI"]
|
Siddharth Jindal
|
2024-10-22T13:21:29
|
2024
| 851
|
["Go", "GenAI", "machine learning", "AI", "AI Healthcare", "ML", "MLOps", "analytics", "generative AI", "Generative AI", "DevOps", "R"]
|
["AI", "machine learning", "ML", "analytics", "generative AI", "GenAI", "MLOps", "R", "Go", "DevOps"]
|
https://analyticsindiamag.com/ai-highlights/the-transformative-impact-of-generative-ai-on-it-services-bpo-software-and-healthcare/
| 3
| 10
| 2
| false
| true
| true
|
50,691
|
Centre Plans To Take AI Into Schools Through CodeIndia
|
As artificial intelligence takes centre stage, the government of India is trying to empower school children with an application-based two-week training module named CodeIndia. The idea behind this program is to teach mid and intermediate level students across the country and make them market-ready. Through CodeIndia, students will acquire enough knowledge to develop the necessary aptitude for developing applications for several sectors, such as aerospace and nuclear physics, among others. This is a much-needed program from the government as, in the current technology landscape, there is a dearth of talented software developers. Many organisations are trying to get rid of employees who are incapable of managing tasks that use the latest technologies. Notably, in India, skill gaps have impeded developments in various technologies, and this, in turn, has had a negative impact on the growth of the country. Instead of leaving the upskilling task to organisations, the government has rightly moved towards equipping students at a nascent stage. This will enable students to blaze their own trail and make new advancements in several technologies. Besides, the module is not just another course of the kind that education tech startups provide. The module, which will include fundamentals as well as intermediate skills, will be taught by specialists. Students will be given a chance to interact with and resolve their doubts with experts from the Massachusetts Institute of Technology (MIT), Stanford University, and other similarly prominent institutes, who will be included in the program to deliver a superior learning experience. To make CodeIndia accessible to every student, it will be offered in various languages other than English. This will allow students from diverse regions to learn and make their mark in the technology marketplace. In the future, the CodeIndia program will also lay the foundation for the human resource department to devise a curriculum for integrating it with regular courses in schools.
|
As artificial intelligence is taking centre stage, the government of India is trying to empower school children with application-based two-week training modules named CodeIndia. The idea behind this program is to teach mid and intermediate level students across the country and make them market-ready. Through CodeIndia, students will acquire enough knowledge to develop a necessary […]
|
["AI News"]
|
["schools"]
|
Rohit Yadav
|
2019-11-26T16:13:08
|
2019
| 304
|
["Go", "artificial intelligence", "schools", "programming_languages:R", "AI", "programming_languages:Go", "GAN", "R", "startup"]
|
["AI", "artificial intelligence", "R", "Go", "GAN", "startup", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/center-plans-to-take-ai-into-the-schools-through-codeindia/
| 2
| 8
| 2
| false
| false
| false
|
10,088,536
|
NVIDIA’s Rival Raises a New Round of Funding, Reaches Valuation of Over $400 Mn
|
South Korean artificial intelligence chip-making startup ‘Sapeon‘ is raising a funding round that puts its valuation above $400 million. Headquartered in California, Sapeon enables “next-generation AI computing” and solutions. The company designs AI semiconductors for data centres and AI chips for applications. The $400 million valuation will give it the much-needed impetus to take on its biggest rival, NVIDIA, the market giant of semiconductors. Sapeon’s parent company, SK Telecom, is one of South Korea’s biggest telecommunication operators. CEO of Sapeon, Soojung Ryu, said that the AI semiconductor market is set to cross $100 billion by 2026. She also believes that with the evolution of AI services like ChatGPT, AI solutions will grow. Sapeon has been using its chips in its own “neural processing unit farms”, which are computer systems for AI applications. It has also sold its services to the cloud service provider NHN Cloud and the semiconductor company SK Hynix. Since its inception in 2022, Sapeon’s market value has grown from 80 billion won to 500 billion won (about US$380 million) as of today. Sapeon has an X220 chip in the market which is built on 28-nanometre technology. The company is currently working towards launching a 7-nanometre AI chip which will be manufactured by Taiwan Semiconductor Manufacturing Company Limited, the biggest contract chipmaker in the world. South Korea’s Emerging AI Market With AI chatbots gaining popularity, the demand for AI chips is on an upward trajectory. South Korea is heavily investing in capturing the AI market, and Sapeon is not the first Korean company to enter the AI race. South Korean startup Rebellions Inc. launched an ATOM AI chip last month. The ATOM chip is said to be superior to its rival NVIDIA’s A100 chip, consuming only 20% of the power of the latter. However, NVIDIA continues to be a leader in the semiconductor space, with OpenAI’s ChatGPT fuelling its growth. It is estimated that over 30,000 NVIDIA GPUs will be required to power ChatGPT.
|
South Korean AI chip startup Sapeon raises millions of dollars in funding to take on NVIDIA in the AI chip race.
|
["AI News"]
|
["AI chip", "GPU", "NVIDIA", "Semiconductor India", "south korea"]
|
Vandana Nair
|
2023-03-02T15:24:54
|
2023
| 329
|
["ChatGPT", "funding", "artificial intelligence", "AI chip", "OpenAI", "AI", "chatbots", "startup", "A100", "GPU", "GPT", "Semiconductor India", "NVIDIA", "R", "south korea"]
|
["AI", "artificial intelligence", "ChatGPT", "OpenAI", "chatbots", "R", "GPT", "startup", "funding", "A100"]
|
https://analyticsindiamag.com/ai-news-updates/nvidias-rival-raises-a-new-round-of-funding-reaches-valuation-of-over-400-mn/
| 4
| 10
| 2
| false
| false
| false
|
63,350
|
How Can Data Science-as-a-Service Help Your Organization?
|
If your business is struggling to reduce operational costs during the ongoing economic crisis, or to maintain the efficiency of services or the quality of products, then Data Science as a Service (DSaaS) can be used to solve these issues. DSaaS is an ideal choice for businesses to manage without a large team of data scientists and analysts in-house. It provides companies access to analytics resources for particular data science demands without much expense on building such teams from scratch. Companies gain advantages based on their capability to make data-driven decisions more efficiently and faster than their competitors. Data alone gives limited value to companies without the expertise, tools, and knowledge to comprehend what questions to ask, how to reveal the right patterns, and the skills to make forecasts that point to profitable action. Data Science As A Service: How Does It Work? DSaaS is mostly a cloud-based delivery model, where different tools for data analytics are provided and can be configured by the user to process and analyse enormous quantities of heterogeneous data efficiently. Customers feed their enterprise data into the platform and get back more valuable analytics insights. These analytic insights are produced by analytical apps, which harmonise analytic data workflows. The workflows are created using a collection of services that run analytical algorithms. Once the clients upload the data to the platform or cloud database, the provider’s data engineers and data scientists can work on the uploaded data. These are mostly subscription-based models. There are multiple data science consulting firms, startups and even bigger cloud platforms which provide data science as a service in varied forms. As part of DSaaS, production-ready predictive models and data analysis can be delivered meticulously using mature methodologies. Examples Such offerings include high-quality and complex analytics solutions which turn your raw data into quantifiable information, without customers having to spend money on specialised data science teams. For example, a recent partnership between Snowflake and Zepl highlighted the importance of data science as a service. Using Zepl’s new native Snowflake integration, small data science teams can rapidly explore, analyse and collaborate around Snowflake’s cloud-built data warehouse. Within minutes, Zepl brings machine learning at scale to Snowflake data across entire data science teams. Zepl’s powerful collaboration capabilities are used by data scientists, data engineers, data analysts, team managers and executives globally for data science needs. DSaaS offerings also exist for specific industry domains. For example, Cogitativo, a Berkeley, California-based company serving healthcare service organisations, recently raised $18.5 million in Series B funding. The funding round was led by Wells Fargo Strategic Capital. Cogitativo implements a machine learning platform for healthcare performance enhancement by allowing clients to recognise and solve healthcare system complexities. Currently, about 50 healthcare companies with over 45 million members utilise the company’s solutions to manage their operational and strategic challenges and navigate marketplace complexity. Then, there are also plug-and-play data science and AI solutions which aim at providing analytical capability underpinned by data scientists’ expertise. 
But such plug-and-play machine learning and AI tools can only remain enablers of analytic capability, not the origin of it. For that, teams may still need data scientists to bring a variety of abilities to the task, chief among them the capability to wrangle messy data. This is where a small team of data scientists may still be needed. Amazon Kendra, an AI-enabled enterprise search tool, responds to queries by searching through a variety of data sources within a company. The search tool can be deployed on websites and interfaces such as chatbots. Kendra uses deep learning models to learn from text from multiple sources and across several domains, including life sciences, legal and financial services, without the need for ML/AI experts and data scientists.
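To make the upload-then-retrieve delivery model described above concrete, here is a small, purely illustrative client sketch. The endpoint URLs, payload fields, job names and API key are hypothetical placeholders, not any specific vendor's API; only the generic pattern of a subscription-based DSaaS platform (upload enterprise data, trigger an analysis, poll for the resulting insights) is being sketched, using the widely available Python requests library.

```python
# Illustrative DSaaS client sketch (hypothetical endpoints and fields).
# Shows the generic pattern: upload enterprise data, request an analysis,
# then poll until the platform returns insights.
import time
import requests

BASE_URL = "https://dsaas.example.com/api/v1"       # hypothetical platform URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # placeholder credential

# 1. Upload the raw enterprise data to the platform.
with open("sales_data.csv", "rb") as f:
    upload = requests.post(
        f"{BASE_URL}/datasets", headers=HEADERS, files={"file": f}
    )
dataset_id = upload.json()["dataset_id"]

# 2. Ask the platform to run a predictive-analytics job on that dataset.
job = requests.post(
    f"{BASE_URL}/jobs",
    headers=HEADERS,
    json={"dataset_id": dataset_id, "task": "churn_prediction"},
).json()

# 3. Poll until the job completes, then fetch the insights.
while True:
    status = requests.get(
        f"{BASE_URL}/jobs/{job['job_id']}", headers=HEADERS
    ).json()
    if status["state"] in ("completed", "failed"):
        break
    time.sleep(10)

if status["state"] == "completed":
    insights = requests.get(
        f"{BASE_URL}/jobs/{job['job_id']}/insights", headers=HEADERS
    ).json()
    print(insights)
```

Real DSaaS platforms differ in their APIs and authentication schemes, but this request-and-poll shape is the common thread behind the subscription model outlined above.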
|
If your business is struggling to reduce operational costs during the ongoing economic crisis or maintain the efficiency of services or the quality products, then Data Science as a Service (DSaaS) should be used to solve these issues. DSaaS is an ideal choice for businesses to manage without a large team of data scientists and […]
|
["IT Services"]
|
["Data Science", "what is data science"]
|
Vishal Chawla
|
2020-04-28T12:00:43
|
2020
| 630
|
["data science", "machine learning", "AI", "chatbots", "ML", "Aim", "deep learning", "analytics", "Data Science", "what is data science", "R", "Snowflake"]
|
["AI", "machine learning", "ML", "deep learning", "data science", "analytics", "Aim", "chatbots", "Snowflake", "R"]
|
https://analyticsindiamag.com/it-services/how-can-data-science-as-a-service-help-your-organization/
| 3
| 10
| 6
| false
| true
| false
|
10,130,763
|
Why Mark Zuckerberg Is Selfish With Open Source
|
With the release of Llama 3.1, Mark Zuckerberg has established himself as the king of open-source AI. Contrary to popular belief, Zuckerberg has admitted that the pursuance of an open-source strategy is due to somewhat selfish reasons about the tech ecosystem, as he wants to influence how the models are developed and integrated into the social fabric. “We’re not pursuing this out of altruism, though I believe it will benefit the ecosystem. We’re doing it because we think it will enhance our offerings by creating a strong ecosystem around contributions, as seen with the PyTorch community,” said Zuckerberg at SIGGRAPH 2024. “I mean, this might sound selfish, but after building this company for a while, one of my goals for the next ten or 15 years is to ensure we can build the fundamental technology for our social experiences. There have been too many times when I’ve tried to build something, only to be told by the platform provider that it couldn’t be done,” he added. Zuckerberg does not want the AI industry to follow the path of the smartphone industry, as seen with Apple. “Because of its closed ecosystem, Apple essentially won and set the terms. Apple controls the entire market and profits, while Android has largely followed Apple. I think it’s clear that Apple won this generation,” he said. He explained that when something becomes an industry standard, other folks work starts to revolve around it. “So, all the silicon and systems will end up being optimised to run this thing really well, which will benefit everyone. But it will also work well with the system we’re building, and that’s, I think, just one example of how this ends up being really effective,” he said. Earlier this year, Meta open-sourced Horizon OS built for its AR/VR headsets. “We’re basically making the Horizon OS that we’re building for mixed reality an open operating system, similar to what Android or Windows was. We’re making it so that we can work with many different hardware companies to create various kinds of devices,” said Zuckerberg. Jensen Loves Llama NVIDIA chief Jensen Huang could not agree more with Zuckerberg. He said that using Llama 2, NVIDIA has developed fine-tuned models that assist engineers at the company. “We have an AI for chip design and another for software coding that understands USD (Universal Scene Description) because we use it for Omniverse projects. We also have an AI that understands Verilog, our hardware description language. We have an AI that manages our bug database, helps triage bugs, and directs them to the appropriate engineers. Each of these AIs is fine-tuned based on Llama,” said Huang. “We fine-tune them, we guardrail them. If we have an AI designed for chip design, we’re not interested in asking it about politics, you know, and religion and things like that,” he explained. Huang joked that an AI chip engineer is costing them just $10 an hour. Moreover, Huang said he believes the release of Llama 2 was “the biggest event in AI last year.” He explained that this was because suddenly, every company, enterprise, and industry—especially in healthcare—was building AI. Large companies, small businesses, and startups alike were all creating AIs. It provided researchers with a starting point, enabling them to re-engage with AI. And he believes that Llama 3.1 will do the same. Army of AI Agents Meta released AI Studio yesterday, a new platform where people can create, share, and discover AIs without needing technical skills. AI Studio is built on the Llama 3.1 models. 
It allows anyone to build and publish AI agents across Messenger, Instagram, WhatsApp, and the web. “Just announced at @siggraph, today we’re beginning to roll out AI Studio, a new place for people to create, share and discover AIs — no technical skills required! AI Studio is built on our Llama 3.1 models, bringing many of the advanced capabilities and flexibility of the new…” pic.twitter.com/mokxvIJqWu — Ahmad Al-Dahle (@Ahmad_Al_Dahle) July 29, 2024 Taking a dig at OpenAI, Zuckerberg said, “Some of the other companies in the industry are building one central agent. Our vision is to empower everyone who uses our products to create their own agents. Whether it’s the millions of creators on our platform or hundreds of millions of small businesses, we aim to pull in all your content and quickly set up a business agent.” He added that this agent would interact with customers, handle sales, take care of customer support, and more. Forget Altman, SAM 2 is Here While the world is still awaiting the voice features in GPT-4o as promised by Sam Altman, Meta released another model called SAM 2. Building upon the success of its predecessor, SAM 2 introduces real-time, promptable object segmentation capabilities for both images and videos, setting a new standard in the industry. “Meta Segment Anything Model v2 (SAM 2) is out. Can segment images and videos. Open source under Apache-2 license. Web demo, paper, and datasets available. Amazing performance.” https://t.co/bGFeDUgZaW — Yann LeCun (@ylecun) July 30, 2024 SAM 2 is the first model to unify object segmentation across both images and videos. This means that users can now seamlessly apply the same segmentation techniques to dynamic video content as they do to static images. One of the standout features of SAM 2 is its ability to perform real-time segmentation at approximately 44 frames per second. This capability is particularly beneficial for applications that require immediate feedback, such as live video editing and interactive media. Huang said this would be particularly useful since NVIDIA is now training robots, believing that the future will be physical AI. “We’re now training AI models on video so that we can understand the world model,” said Huang. He added that they will connect these AI models to the Omniverse, allowing them to better represent the physical world and enabling robots to operate in these Omniverse worlds. On the other hand, this model would be beneficial for Meta, as the company is bullish on its Meta Ray-Ban glasses. “When we think about the next computing platform, we break it down into mixed reality, the headsets, and the smart glasses,” said Zuckerberg.
|
With the release of Llama 3.1, Mark Zuckerberg has established himself as the king of open-source AI. Contrary to popular belief, Zuckerberg has admitted that the pursuance of an open-source strategy is due to somewhat selfish reasons about the tech ecosystem, as he wants to influence how the models are developed and integrated into the […]
|
["Deep Tech"]
|
["Meta AI", "Open Source AI"]
|
Siddharth Jindal
|
2024-07-30T18:00:00
|
2024
| 1,016
|
["Go", "Meta AI", "OpenAI", "AI", "PyTorch", "GPT-4o", "ML", "Open Source AI", "Ray", "Aim", "GPT", "R"]
|
["AI", "ML", "GPT-4o", "OpenAI", "Aim", "Ray", "PyTorch", "R", "Go", "GPT"]
|
https://analyticsindiamag.com/deep-tech/why-mark-zuckerberg-is-selfish-with-open-source/
| 3
| 10
| 5
| false
| true
| true
|
10,008,169
|
Can This Tiny Language Model Defeat Gigantic GPT3?
|
While GPT-3 has been bragging about achieving state-of-the-art performance on complex NLP tasks with over a hundred billion parameters, researchers from LMU Munich, Germany have proposed a language model that can show similar achievements with far fewer parameters. GPT-3 has been trained on 175 billion parameters and thus showed remarkable few-shot abilities, and by reformulating tasks and priming it with a few examples, it also showed immense capabilities on the SuperGLUE benchmark. However, it comes with two significant drawbacks — large models aren’t always feasible for real-world scenarios, and since the context window of these monstrous models is limited to a few hundred tokens, priming doesn’t scale beyond a few examples. Thus, the researchers proposed an alternative to priming, i.e. Pattern-Exploiting Training (PET), which merges the idea of reformulating tasks as Cloze questions with regular gradient-based fine-tuning. PET requires unlabelled data, which is easier to gather than labelled data, thus making it usable for real-world applications. It also sidesteps a significant limitation of priming, where the outcome predicted by large language models like GPT-3 must correspond to a single token in the vocabulary, which gets challenging for many NLP tasks. Also Read: Can GPT-3 Pass Multitask Benchmark? Pattern Exploiting Training (PET) In this study, the researchers modified PET to predict more than one token in order to outperform GPT-3 on SuperGLUE with 32 training examples and only 0.1% of its parameters. The researchers showcased how PET leverages masked language models to assign probabilities to sequences of text. To facilitate this, the researchers considered mapping inputs to outputs, for which PET requires pattern-verbaliser pairs (PVPs), which consist of a pattern that maps inputs to Cloze questions containing a single mask, and a verbaliser that maps each output to a single token representing its task-specific meaning. Application of pattern-verbaliser pairs for recognising textual entailment: an input is converted into a Cloze question, and the probability of each output is derived from the probability of its verbalisation being a plausible choice for the masked position. PET thus derives the probability of an output being correct from the probability of its token being the right choice at the masked position. For a given task, identifying PVPs that perform well is challenging in the absence of a large development set, which is why pattern-exploiting training enables a combination of multiple PVPs. For this, for each PVP, a masked language model is fine-tuned on training examples, and the ensemble of fine-tuned MLMs is then used to annotate a set of unlabelled data with soft labels based on their probability distributions. Further, the soft-labelled dataset is leveraged for training a regular sequence classifier. While carrying this out, the researchers noted that PET comes with a limitation of the verbaliser, where it struggles to map each possible output to a single token for many tasks. Thus, the researchers generalised verbalisers to functions that can map outputs to multiple tokens, which required some modifications to inference and training. Also Read: OpenAI’s Not So Open GPT-3 Can Impact Its Efficacy PET vs GPT-3 on SuperGLUE For comparing the performance of GPT-3 and PET, the researchers chose SuperGLUE as the benchmark. While carrying this out, the researchers noted that PET cannot be evaluated on the exact same training data as GPT-3. This is largely because GPT-3 leverages different training data for different tasks. 
So, to make it a level playing field, the researchers created a new training set of 32 examples per task, randomly selected using a fixed random seed. In addition, they developed a set of 20,000 unlabelled examples for each task by removing all the labels. The researchers refer to the resulting collection of training and unlabelled examples as FewGLUE. The tasks used were BoolQ, a QA task; CB and RTE, text entailment tasks; COPA; MultiRC, another QA task; and ReCoRD, a cloze-style task. As the underlying model for PET, the researchers opted for ALBERT. PET was then run on the FewGLUE training set for all SuperGLUE tasks; however, for COPA, WSC and ReCoRD, the researchers proposed a modification of PET. The proposed method was then trained on all tasks except COPA, WSC and ReCoRD, for which the regular PET results were simply reused. The experiments show that ALBERT with PET performs similarly to GPT-3, which is larger by a factor of 785; on average, the proposed method performs 18 points better than GPT-3. Breaking down the results, the proposed model, like GPT-3, does not perform well on WiC, and ReCoRD is the only task on which GPT-3 showed consistently better performance than PET. Also Read: 15 Interesting Ways GPT-3 Has Been Put To Use Wrapping Up With this study, the researchers showed that it is possible to achieve few-shot performance on NLP tasks similar to GPT-3 using PET, which reformulates tasks as cloze questions and trains models for the different reformulations. To make this happen, the researchers modified PET so it can be used for tasks that require multiple tokens. Although the results show that the proposed method outperforms GPT-3 on many tasks, it did not manage similar results on every task. Nevertheless, such a study opens up opportunities for pushing AI boundaries with modest hardware. Read the whole paper here.
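For readers who want to see the core idea in code, the snippet below is a minimal, illustrative sketch of a single pattern-verbaliser pair for a textual-entailment-style task, using an off-the-shelf masked language model from the Hugging Face transformers library. The model name (bert-base-uncased), the pattern wording and the label-to-token mapping are illustrative assumptions, not the exact choices made in the PET paper.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

# Illustrative masked language model; the paper itself used ALBERT.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Verbaliser: map each task label to a single token in the vocabulary.
verbaliser = {"entailment": "yes", "contradiction": "no"}

def pvp_score(premise: str, hypothesis: str) -> dict:
    # Pattern: turn the input pair into a cloze question with a single mask.
    text = f"{premise}? {tokenizer.mask_token}, {hypothesis}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # The score of each label is the logit of its verbalised token
    # at the masked position.
    return {label: logits[tokenizer.convert_tokens_to_ids(token)].item()
            for label, token in verbaliser.items()}

print(pvp_score("A man is playing a guitar", "A person is making music"))
```

In PET proper, one such model is fine-tuned per PVP and the ensemble then soft-labels unlabelled data for a final classifier; the sketch only shows how a single pattern and verbaliser turn a classification problem into a cloze question.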
|
While GPT-3 has been bragging about achieving state-of-the-art performance on complex NLP tasks with hundreds of billions of parameters, researchers from LMU Munich, Germany have proposed a language model that can show similar results with far fewer parameters. GPT-3 has been trained with 175 billion parameters and thus showed remarkable few-shot abilities, and by reformulating a […]
|
["AI Features"]
|
["GPT-3"]
|
Sejuti Das
|
2020-09-23T15:00:30
|
2020
| 881
|
["GPT-3", "TPU", "OpenAI", "AI", "ML", "RAG", "NLP", "BERT", "GPT", "R", "llm_models:GPT"]
|
["AI", "ML", "NLP", "OpenAI", "RAG", "TPU", "R", "BERT", "GPT", "llm_models:GPT"]
|
https://analyticsindiamag.com/ai-features/can-this-tiny-language-model-defeat-gigantic-gpt3/
| 3
| 10
| 0
| true
| true
| true
|
10,121,863
|
Wipro Brings ‘Parallel Reality’ to Airports
|
A couple of years ago, Delta Airlines introduced the concept of ‘Parallel Reality’ at the Detroit Metropolitan Airport. With this, they enhanced a customer’s airport experience by using a public screen to display personalised information relevant to each passenger. Now, interestingly, one of the largest IT companies in India is building a customised passenger experience on similar lines. “We are in the process of developing a comprehensive web-based mobile digital concierge that represents a significant leap forward in meeting modern travellers’ expectations,” said Wipro Limited Canada transportation cluster head and general manager Anudeep Kambhampati in an exclusive interaction with AIM. The new platform integrates journey planning, baggage tracking, and personalised recommendations that will be tailored to each passenger’s travel path. “The integration of generative AI chatbots further enriches this personalised experience, offering real-time, interactive assistance,” said Kambhampati. Wipro has conducted a few PoCs around this in different stages and is in the process of refining the experience before it is launched to its clients. Wipro vs the World While Wipro is increasingly working its way into the airline industry, with strategic partnerships with Toronto Pearson and industry bodies such as the International Air Transport Association (IATA) and Airport Council International, other Indian IT players are also in the race. IT giant Infosys also offers similar AI/ML solutions through its cloud platform Infosys Cobalt Airline Cloud, which offers services such as optimised baggage tracking and handling, security monitoring and more. TCS has Aviana, which provides smart airport solutions and engineering operations through its unified data platform. Non-IT players such as Prisma AI are also providing services through computer vision technologies at Adani airports. Interestingly, the current technology partner for Digi Yatra is IDEMIA, a French company that provides its services at the Delhi, Hyderabad and Goa airports. In addition, the Digi Yatra Foundation recently severed ties with Dataevolve, a Hyderabad-based company that served as its initial IT solutions provider, and is now seeking to partner with Infosys and TCS. Wipro told AIM that it has been offering technological and operational services to enhance airport facilities. “Our computer vision AI technology has revolutionised passenger flow analytics, providing real-time wait times with over 90% accuracy,” said Kambhampati. The company’s in-house product, VisionEDGE, powers more than 5,000 flight information displays across 15+ airports in the US, Canada, India, and the Middle East. Wipro’s services reach over 300 million passengers per year, covering more than 200 airlines across 300+ destinations, along with 15+ terminals and 25+ runways. “On the customer-facing side, we are collaborating with airports in North America and the Middle East to deploy generative AI-powered virtual assistants,” said Kambhampati. In 2022, Wipro also developed a first-of-its-kind passenger queue system that provided passengers with real-time boarding updates. A Unique Generative AI Strategy Wipro uses Azure OpenAI, AWS AI, OpsRamp, ServiceNow AI, Zensors, the Wipro AI 360 platform – the company’s holistic and AI-centric innovation ecosystem – and several other tech infrastructure products. Kambhampati highlights that generative AI’s inherent intelligence can be trained but not fully controlled. 
Thanks to this, Wipro recommends a “phased implementation strategy” to address inaccuracies and legal concerns by exposing AI applications to a select section of the public for feedback and benchmarking. Wipro has been making long strides in the generative AI race. With a rising demand from Wipro’s customers for AI solutions, the company is providing enterprise services by developing AI models using a generative AI framework. The company has even trained 225,000 employees in AI 101. Wipro CEO Srini Pallia had earlier said, “We focus on industry-specific offerings and business solutions led by consulting and infused with AI, and we’ll continue to build this.”
|
Wipro’s VisionEdge platform powers more than 5,000 flight information displays across 15+ airports in the US, Canada, India, and the Middle East.
|
["AI Trends"]
|
["Computer Vision", "Generative AI", "Wipro"]
|
Vandana Nair
|
2024-05-28T16:00:00
|
2024
| 605
|
["Wipro", "OpenAI", "AI", "chatbots", "AWS", "ML", "virtual assistants", "computer vision", "Aim", "analytics", "Computer Vision", "generative AI", "Generative AI"]
|
["AI", "ML", "computer vision", "analytics", "generative AI", "OpenAI", "Aim", "chatbots", "virtual assistants", "AWS"]
|
https://analyticsindiamag.com/ai-trends/wipro-brings-parallel-reality-to-airports/
| 3
| 10
| 3
| false
| false
| false
|
48,658
|
How To Do Machine Learning When Data Is Unlabelled
|
Semi-weakly supervised learning is a product of combining the merits of semi-supervised and weakly supervised learning. The goal here is to create efficient classification models. To test this, Facebook AI has used a teacher-student model training paradigm and billion-scale weakly supervised data sets. An example of a weakly supervised data set can be hashtags associated with publicly available photos. Since Instagram is rich with such data, it was chosen for performing semi-weakly supervised learning. For the experiments, the team at Facebook AI used “semi-weakly” supervised (SWSL) ImageNet models that are pre-trained on 940 million public images with 150,000 hashtags. In this case, the associated hashtags are only used for building a better teacher model. While training the student model, those hashtags are ignored and the student model is pre-trained with a subset of images selected by the teacher model from the same 940 million public image dataset. The results show that this approach has set new benchmarks for image and video classification models. Training With Unlabeled Data via FAIR The semi-supervised training framework is used to generate lightweight image and video classification models. The training procedure is carried out as follows: first, a larger-capacity, highly accurate “teacher” model is trained with all available labelled data. The teacher model then predicts labels and the corresponding softmax scores for all the unlabelled data. The top-scoring examples are used to pre-train the lightweight, computationally efficient “student” classification model. Finally, the student model is fine-tuned with all the available labelled data (a minimal code sketch of this selection step appears at the end of this article). However, using semi-supervised data alone won’t be sufficient to achieve state-of-the-art results at billion scale. To improve on this model, researchers at Facebook introduced a semi-weak supervision approach. The researchers used the weakly supervised teacher model to select pretraining examples from the same data set of one billion hashtagged images. To create highly accurate models, the teacher model is made to predict labels for the same weakly supervised data set of 65 million publicly available Instagram videos with which it was pre-trained. For example, consider a tail class like the “African Dwarf Kingfisher” bird. One might have a hard time finding a dataset containing labelled images of this bird, and there may not be a sufficient number of weakly-supervised/tagged examples. However, chances are that a lot of untagged images of this bird exist in the unlabelled dataset. As discussed above, the teacher model trained with labels is able to identify enough images from the unlabelled data and classify the right kind of bird. The teacher model obtained by pre-training on weakly-supervised data followed by fine-tuning on task-specific data has shown promising results. The student model obtained by training on the data selected by the teacher model is significantly better than the one obtained by training directly on the weakly-supervised data. This particular approach is what has led to achieving state-of-the-art results. The results show that the weakly supervised teacher model, with 24x greater capacity than the student model, provided 82.8% top-1 accuracy on the validation set. Training details: Models are trained using synchronous stochastic gradient descent (SGD) on 64 GPUs across 8 machines. Each GPU processes 24 images at a time, and batch normalisation is applied to all convolutional layers on each GPU. 
The weight decay parameter is set to 0.0001 in all the experiments. For fine-tuning on ImageNet, the learning rate is set to 0.00025 over 30 epochs. Key Takeaways The semi-weakly supervised training framework has resulted in a new state-of-the-art academic benchmark for lightweight image and video classification models. It helps reduce the accuracy gap between high-capacity state-of-the-art models and computationally efficient production-grade models. It can be used to create efficient, low-capacity, production-ready models that deliver substantially higher accuracy than was previously possible. By using a very large dataset of unlabelled images via semi-supervised learning, the researchers were able to improve the quality of CNN models.
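To make the teacher-student selection step concrete, here is a minimal sketch in generic PyTorch. The function name, the loader and the value of k_per_class are illustrative placeholders and not Facebook AI's actual implementation, which operates at billion-image scale.

```python
import torch
import torch.nn.functional as F

def select_pretraining_examples(teacher, unlabelled_loader, k_per_class):
    """Run the teacher over unlabelled images and keep the top-scoring
    examples per predicted class as pseudo-labelled data for pre-training
    the student (everything is kept in memory here for simplicity)."""
    teacher.eval()
    scored = []  # (softmax score, pseudo-label, image)
    with torch.no_grad():
        for images in unlabelled_loader:
            probs = F.softmax(teacher(images), dim=1)
            scores, labels = probs.max(dim=1)
            scored.extend(zip(scores.tolist(), labels.tolist(), images))
    # Keep the k highest-scoring images for each predicted class.
    per_class = {}
    for score, label, image in sorted(scored, key=lambda x: -x[0]):
        bucket = per_class.setdefault(label, [])
        if len(bucket) < k_per_class:
            bucket.append((image, label))
    return [pair for bucket in per_class.values() for pair in bucket]

# The student is then pre-trained on the returned pseudo-labelled examples
# and finally fine-tuned on the labelled data, e.g. with synchronous SGD and
# the hyperparameters quoted above (weight decay 1e-4, learning rate 2.5e-4).
```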
|
Semi-weakly supervised learning is a product of combining the merits of semi-supervised and weakly supervised learning. The goal here is to create efficient classification models. To test this, Facebook AI has used a teacher-student model training paradigm and billion-scale weakly supervised data sets. An example of a weakly supervised data set can be hashtags associated […]
|
["Deep Tech"]
|
["how to retrain data", "supervised learning"]
|
Ram Sagar
|
2019-10-23T13:06:08
|
2019
| 635
|
["Go", "programming_languages:R", "AI", "how to retrain data", "programming_languages:Go", "supervised learning", "CNN", "R"]
|
["AI", "R", "Go", "CNN", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/deep-tech/how-to-do-machine-learning-when-data-is-unlabelled/
| 3
| 6
| 1
| true
| true
| true
|
67,319
|
Machine Learning Behind Google Translate
|
Google Translate was launched 10 years ago, with phrase-based machine translation as its key algorithm during the initial days. Later, Google came up with other machine learning advancements that changed the way we look at foreign languages forever. In the next section, we look at the machine learning methods used by Google for its translation services. Google Neural Machine Translation The main improvement in the translation systems was achieved with the introduction of Google Neural Machine Translation, or GNMT. Its model architecture consists of an encoder network and a decoder network, with an attention module sitting between the two. A typical setup has 8 encoder LSTM layers and 8 decoder layers. Using a human side-by-side evaluation on a set of isolated simple sentences, GNMT showed a reduction in translation errors by an average of 60% compared to Google’s phrase-based production system. Zero-Shot Translation With NMT While GNMT provided significant improvements in translation quality, scaling up to new supported languages presented a significant challenge. The question here was: can the system translate between a language pair it has never seen before? The modified GNMT was tasked with translations between Korean and Japanese, even though no Korean⇄Japanese examples had been shown to the system. Impressively, the model generated reasonable Korean⇄Japanese translations despite never having been taught to do so. This was called “zero-shot” translation. Introduced in 2016, this was one of the first successful demonstrations of transfer learning for machine translation. Transformer: The Turning Point The introduction of the Transformer architecture revolutionised the way we deal with language. In the seminal paper “Attention Is All You Need”, Google researchers introduced the Transformer, which later led to other successful models such as BERT. In the case of translation, for example, if in the sentence “I arrived at the bank after crossing the river” the word “bank” has to be identified as referring to the shore of a river and not a financial institution, the Transformer learns this and makes the decision in a single step. Another intriguing aspect of the Transformer is that developers can even visualise what other parts of a sentence the network attends to when translating a given word, thus gaining insights into how information travels through the network. Translating Foreign Menus With A CNN Five years ago, Google announced that the Google Translate app could do real-time visual translation of multiple languages. For example, when an image is fed to the Google Translate app, it first finds the letters in the picture. These letters are then isolated from background objects like trees or cars: the model looks at blobs of pixels that have a similar colour to each other and that are also near other similar blobs of pixels. The app then recognises what each letter actually is with the help of a convolutional neural network. In the next step, the recognised letters are checked in a dictionary to get translations. Once found, the translation is rendered on top of the original words in the same style as the original. Speech Translation With Translatotron In traditional cascade systems, translating speech requires an intermediate representation. 
With Translatotron, Google demonstrated that a single sequence-to-sequence model can directly translate speech from one language into speech in another, without the need for an intermediate text representation, unlike cascaded systems. Translatotron is claimed to be the first end-to-end model that can directly translate speech from one language into speech in another language, and it was also able to retain the source speaker’s voice in the translated speech. For Reducing Gender Bias In Translate Using a neural machine translation (NMT) system to show gender-specific translations is a challenge. So last month, Google announced an improved approach — Rewriter — that uses rewriting or post-editing to address gender bias. After the initial translation is generated, it is reviewed to identify instances where a gender-neutral source phrase yielded a gender-specific translation. In that case, a sentence-level rewriter is applied to generate an alternative gendered translation. In the next step, both the initial and the rewritten translations are reviewed to ensure that the only difference between them is the gender. For the data generation process, a one-layer transformer-based sequence-to-sequence model was trained. Apart from the above-mentioned models, many other techniques, such as back-translation, have played a crucial role in the way we use translation apps. Know more about the advancements in Google Translate here.
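To illustrate the attention mechanism that both GNMT and the Transformer rely on, here is a minimal NumPy sketch of scaled dot-product attention. The toy shapes and random inputs are purely illustrative and not taken from Google's models.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d_k). Each output position is a
    weighted sum of the values, with weights showing how strongly each query
    attends to every key -- the mechanism that lets 'bank' attend to 'river'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(5, 8))                 # 5 tokens, dimension 8
output, attn = scaled_dot_product_attention(Q, K, V)
print(attn.round(2))  # each row shows what one token attends to
```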
|
Google Translate was launched 10 years ago. During the initial days, Google Translate was launched with Phrase-Based Machine Translation as the key algorithm. Later, Google came up with other machine learning advancements that changed the way we look at foreign languages forever. In the next section, we look at the machine learning methods used by […]
|
["Deep Tech"]
|
["Google Translate", "translate"]
|
Ram Sagar
|
2020-06-14T14:00:16
|
2020
| 742
|
["Go", "machine learning", "AI", "neural network", "Google Translate", "RAG", "BERT", "Aim", "translate", "transformer architecture", "CNN", "R"]
|
["AI", "machine learning", "neural network", "Aim", "RAG", "R", "Go", "transformer architecture", "BERT", "CNN"]
|
https://analyticsindiamag.com/deep-tech/google-translate-machine-learning/
| 3
| 10
| 1
| false
| true
| true
|
31,383
|
Does The Rise Of Robot Journalism Mean The End Of Newsrooms?
|
In a time when global newsrooms are becoming smaller due to rapid technological advancement, robot journalism has emerged as a threat to the fourth estate. Artificial intelligence has introduced a new paradigm in present-day journalism, and newsrooms across the globe are facing fears of staff cuts. Automated journalism has already made its way into newsrooms, with automated news writing and distribution without human supervision already a reality. ‘Jia Jia’ was the first humanoid robot journalist, created in April by developers from the University of Science and Technology in China’s Anhui province. She hit headlines when she reported for the country’s news agency Xinhua and conducted a live interview with an editor of a popular tech magazine. More recently, an upgraded robotic journalist, Zhang Zhou, appeared on a Chinese news channel; the official news network claimed it to be the world’s first artificial intelligence news anchor. Innovations In Robotic Journalism In robotic journalism, also called automated journalism or algorithmic journalism, news articles are generated by computer programs and AI software rather than human reporters. Voice, tone, and style can also be customised depending on the desired output. AI companies such as Automated Insights, Narrative Science or Yseop are already developing and delivering such algorithms, chatbots, and automated reporting systems to newsrooms around the globe. Hold your breath: these robots can produce a story in a matter of seconds. The AP, for example, began publishing articles for earnings reports last year, using software from Automated Insights. The Associated Press published a short financial news story, “Apple tops Street IQ forecasts”. The piece could easily have been written by a human – but anyone who reads it to the end would see that it was generated by Automated Insights, or what we call a robotic journalist. One of the projects bringing artificial intelligence to newsrooms is INJECT, an AI-based tool making it easier to find original angles to a story. The Norwegian News Agency (NTB) started work on a project to generate automated football news coverage, which was launched in 2016. Together with experts in artificial intelligence, a group of journalists learned new skills whilst the robot was being “trained”, a decision crucial to the development of the algorithm. Thomson Reuters is also already using machine learning algorithms to write its stories. Google, for its part, has provided the British news agency Press Association with a $1 million grant to develop a computer program able to gather and write nearly 30,000 stories a month — a volume impossible to match manually. Will AI Take Over Journalism Jobs Completely? There are several benefits to using a machine. For example, robots can act as assistants for tasks such as writing up press releases and data-driven stories. They are even able to conduct face-to-face interviews, but cannot ask follow-up questions, craft a colourful feature story or in-depth analysis, or shoot and edit a package for TV broadcast. By automating routine stories and tasks, journalists can free up time for more challenging jobs such as covering events and investigative reporting. It also paves the way for greater efficiency and cost-cutting measures for news organisations struggling to survive. Robot journalism is cheaper because large quantities of content can be produced at quicker speeds. 
Apart from fears about even more job cuts in the media industry, there are obvious concerns about the credibility and quality of automated journalism and the use of algorithms. AI cannot replace human skills such as creativity, humour, or critical thinking in the newsroom, which are all crucial aspects of the media profession. Outlook Although, to date, there are no reports of robotic journalism affecting job prospects in the Asian market, according to journalist and researcher Laurence Dierickx, it has started showing its effects in European countries. Dierickx has released figures on how many journalists are expected to lose their jobs as a result of AI so far. Other prospective studies also suggest that more journalists are likely to be affected (International Data Corporation 2016 and Ericsson 2017), but at the same time, these studies underline that jobs involving a human interface will be preserved. According to Dierickx, there are a lot of contradictions, and no one can predict the future. But with automated journalism gaining ground in newsrooms, 2019 will prove to be a critical year for journalism and media.
|
In a time where global newsrooms are becoming smaller due to enhanced technological advancement, robotics journalism has emerged as a threat to the fourth estate. Artificial Intelligence has introduced a new paradigm in present-day journalism and newsrooms across the globe are facing fears of staff cuts. Automated journalism has already made its way into newsrooms […]
|
["AI Features"]
|
[]
|
Martin F.R.
|
2018-12-12T08:22:52
|
2018
| 716
|
["Go", "machine learning", "artificial intelligence", "TPU", "AI", "chatbots", "RAG", "Aim", "GAN", "R"]
|
["AI", "artificial intelligence", "machine learning", "Aim", "RAG", "chatbots", "TPU", "R", "Go", "GAN"]
|
https://analyticsindiamag.com/ai-features/does-the-rise-of-robot-journalism-mean-the-end-of-newsrooms/
| 3
| 10
| 2
| false
| true
| true
|
53,884
|
What’s New In Pandas 1.0?
|
Pandas — a Python library for data structures and analysis — has been one of the essential tools for data science. Imagining data science workflows in the absence of Pandas can be nothing less than a nightmare. Although analysis can be carried out without importing data into Pandas data frames, data scientists prefer Pandas because its powerful attributes and methods make the evaluation of data more accessible. “Pandas allows us to focus more on research and less on programming. We have found pandas easy to learn, use, and maintain. The bottom line is that it has increased our productivity,” said Roni Israelov, PhD, portfolio manager, AQR Capital Management. On 9 January, the Pandas team released Pandas 1.0, which enhances some functionality while deprecating other features. This is the first major release, and it will help streamline data science practices. Besides, from this release onwards, Pandas will only be supported on Python 3.6 and above. It has further introduced a new support policy for all future versions of the library: minor releases will deprecate functionality, while major releases will remove deprecated features. Here are the changes that will have an impact on the workflow of developers. Handling Missing Values Datasets often have missing values, which causes hindrance during data analysis. Developers replace the missing values with null, NaN, or NA values. The common practice was np.nan for float data, np.nan or None for object-dtype data, and pd.NaT for datetime-like data. This led to different behaviours in arithmetic operations, whereas the new pd.NA propagates as a “missing or unknown” value in comparison operations. To mitigate the problems associated with the different approaches, Pandas 1.0 consolidates them into a single missing-value marker, pd.NA. As this is a significant change, the project considers it experimental and might change the behaviour if required. Introduction Of String Data Type Until now, strings in NumPy arrays were stored with the object dtype. However, one could also store non-string data in an array that was only supposed to have strings, creating a barrier to maintaining a strings-only array. Since the object dtype didn’t check for strings before appending a value, integers and floats often slipped in while working with string arrays. Consequently, Pandas now has a dedicated string extension type, which ensures that the array holds only string objects. Besides, a StringArray will now display string instead of object as its dtype, improving readability. The type can be specified as dtype=pd.StringDtype() or dtype="string". Handling Missing Values In Boolean Data Type Booleans only had two values, True and False, which caused hindrance when data was missing: missing values were treated as False, resulting in biased data. Therefore, Pandas 1.0 includes an extension type for keeping track of missing values in boolean data, with BooleanDtype and BooleanArray. Missing entries now appear as <NA>, improving the quality of the data. Increased Performance With Numba The apply() function is a powerful function that enables manipulation of data on a series or data frame by passing a function. Additionally, apply() can be used for rolling computations, where a choice of execution engine is now available. Since developers use huge data sets, the speed of rolling apply() with the default Cython engine can become a bottleneck; with the newly added Numba engine, one can gain significant performance gains on large data sets. 
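The following short sketch pulls together the features above: pd.NA, the string and boolean extension dtypes, and the optional Numba engine for rolling apply(). The data is toy data, and the last part assumes the optional numba package is installed.

```python
import numpy as np
import pandas as pd

# A single missing-value marker: the missing entry shows up as <NA>
s = pd.Series([1, 2, None], dtype="Int64")
print(s)

# Dedicated string dtype, so the array is guaranteed to hold only strings
names = pd.Series(["ada", "grace", None], dtype="string")
print(names.dtype)          # string, not object

# Nullable boolean dtype keeps track of missing values instead of
# silently treating them as False
flags = pd.array([True, False, None], dtype="boolean")
print(flags)

# Rolling apply() with the Numba engine (requires numba; raw=True is needed)
df = pd.DataFrame({"x": np.arange(1_000_000, dtype=float)})

def window_mean(values):
    return values.mean()

rolled = df["x"].rolling(10).apply(window_mean, engine="numba", raw=True)
print(rolled.tail())
```

All of these dtypes are opt-in, so existing object- and float-based workflows continue to work unchanged.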
Data Frame Summary Readability One of the annoying things about data frames in Python compared to data frames in R was the readability of the summary output. Pandas has now enhanced the output of DataFrame.info to help developers assimilate data effortlessly: DataFrame.info() will display line numbers for the column summary when used with verbose=True. Deprecations Quite a few features have been deprecated. The most notable one concerns selecting columns from a DataFrameGroupBy object: passing a list of keys or a tuple of keys for subsetting is deprecated, and one should now use a list of items instead of keys. Another widely used change is in DataFrame.hist() and Series.hist(): figsize no longer has a "default" value, and one needs to pass a tuple for the desired plot size. Bug Fixes The new release has also done away with numerous bugs to improve reliability during data analysis. fillna() used to raise a ValueError when it encountered values other than categorical data; a test for this inconsistency has now been added and the exceptions resolved. Casting categorical values to integers also resulted in undesired output, especially with NaN values, where the outputs were incorrect; the updated Pandas is free from this bug.
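As a quick follow-up to the DataFrame.info() change mentioned above, the small example below (with made-up data) shows the verbose column summary, where each column is now listed with its own index number.

```python
import pandas as pd

df = pd.DataFrame({
    "city": pd.Series(["Pune", "Delhi", None], dtype="string"),
    "population_mn": [7.4, 31.2, 11.0],
    "is_capital": pd.array([False, True, None], dtype="boolean"),
})
# Columns are listed with a '#' number, dtype and non-null count
df.info(verbose=True)
```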
|
Pandas — a Python library for data structure and analysis, has been one of the essential tools for data science. Imagining data science workflows in the absence of Pandas can be nothing less than a nightmare. Although analysis can be carried out without importing data into Pandas data frames, data scientists prefer Pandas due to […]
|
[]
|
[]
|
Rohit Yadav
|
2020-01-15T13:00:00
|
2020
| 766
|
["data science", "NumPy", "Go", "TPU", "API", "AI", "Python", "Ray", "R", "Pandas"]
|
["AI", "data science", "Ray", "Pandas", "NumPy", "TPU", "Python", "R", "Go", "API"]
|
https://analyticsindiamag.com/ai-features/whats-new-in-pandas-1-0/
| 2
| 10
| 0
| true
| false
| true
|
10,052,570
|
Human-Centred Digital Platform Infogain Appoints Anil Kaul As Its Chief AI Officer
|
Silicon Valley-based human-centred digital platform and software engineering services company Infogain announced the appointment of Anil Kaul, PhD, as its Chief Artificial Intelligence (AI) Officer. He will also continue to serve as EVP – AI and Analytics at Infogain and CEO at Absolutdata, an Infogain company. Dr Kaul will guide Infogain customers through their AI-led digital innovation and transformation journeys and enable them to build cloud data platforms and to install and run AI algorithms. He will also build AI competency across Infogain’s delivery organisation and enable internal process improvements in Infogain Finance, HR and other support functions. Infogain will also more than double its investment in its NAVIK AI platform, including enhancing its seven key products and adding more use cases. “I look forward to partnering with our customers on their AI-led digital innovation & transformation journey. Today, we can take petabytes of structured and unstructured data such as text, images and video and combine these through AI techniques such as deep neural networks and machine learning to produce ready-to-action recommendations for front line business teams resulting in the significant incremental bottom line and top-line impact,” said Dr Kaul. Dr Kaul first worked with AI during his PhD at Cornell, running neural network models on the Cornell Supercomputer. Over the past 25 years, he has developed deep expertise in combining AI, Analytics and Data to solve complex client business problems. CEO Ayan Mukerji said, “I am excited about Anil’s new role at Infogain. As we continue to build our AI capabilities, Anil’s appointment deepens our commitment to building human-centred platforms that combine people, cloud and AI. In Anil, we have both an experienced leader and a thought leader. I wish Anil the very best in his new role.” Many Infogain customers already use AI on the NAVIK AI platform. The company says a Fortune 50 computer software company uses AI to optimize marketing campaigns, a Fortune 100 company achieved a 3% revenue uptick through AI-driven assortment recommendations, and an insurance tech product uses AI in its photo-based claim estimation feature. Infogain supports these and other customers with over 350 advanced analytics and AI engineers in its design-and-build centres in Gurugram (Gurgaon), Noida, and Seattle.
|
Dr Kaul will guide Infogain customers through their AI-led digital innovation and transformation journeys and enable them to build cloud data platforms and to install and run AI algorithms.
|
["AI News"]
|
["AI (Artificial Intelligence)", "analytics companies", "Artificial Intelligence India", "Data Science", "Data Scientist", "Deep Learning", "Machine Learning", "PhD"]
|
Victor Dey
|
2021-10-28T18:58:59
|
2021
| 364
|
["Go", "artificial intelligence", "machine learning", "PhD", "AI", "neural network", "Artificial Intelligence India", "Machine Learning", "Git", "analytics companies", "Aim", "analytics", "GAN", "Deep Learning", "Data Science", "Data Scientist", "R", "AI (Artificial Intelligence)"]
|
["AI", "artificial intelligence", "machine learning", "neural network", "analytics", "Aim", "R", "Go", "Git", "GAN"]
|
https://analyticsindiamag.com/ai-news-updates/human-centred-digital-platform-infogain-appoints-anil-kaul-as-its-chief-ai-officer/
| 3
| 10
| 2
| true
| true
| false
|
10,077,403
|
How Did This VFX Company Win Seven Oscars
|
One of the Hindi film industry’s most extravagant projects—‘Brahmastra’—was released a few weeks back. Even years before its release, the film was widely discussed and anticipated for two reasons—the massive budget it was made on (estimates suggest over INR 400 crore) and the spectacular VFX. The movie reportedly had 4,500 VFX shots, more than any film made before it. Double Negative (DNEG), a British-Indian visual effects and computer animation company, was behind the VFX of Brahmastra. This is the same company that worked on Dune, Ex-Machina, Blade Runner 2049, Interstellar, and Inception and won seven Oscars. Analytics India Magazine caught up with Namit Malhotra, the chairman and CEO of DNEG, to talk about the animation industry, how evolving technology has affected it, and the future course. Hailing from a film background (his father, Naresh Malhotra, was a producer-director and his grandfather, MN Malhotra, a cinematographer), Malhotra grew up under the immense influence of cinema. “My aspiration was to become a filmmaker. I grew up on movies like Jurassic Park and Forrest Gump, and this kind of cinema really enthused me. As I grew up, the idea of helping filmmakers tell stories and create great cinema by bridging the gap with technology really got me on this journey,” said Malhotra. Malhotra started his journey in this field in the 1990s when animation was seen as a mere correction tool; cut to 2022, and VFX has become the crowd puller. When we asked Malhotra about this transformation, he said, “To be honest, it is also dependent on how things develop with time. Back then, there was a lot of limitation on what technology could do—it was also very expensive and time-consuming. With time, animation has become more than just a tool to fix a scene; it has introduced a new way of doing things which have truly changed the grammar of filmmaking. We basically say to filmmakers, if you can dream it, we can do it. And that’s sort of the power of the technology and the capabilities that we have.” What role is AI playing Continuing the talk on how technology has changed the animation industry and cinema in general, we could no longer ignore the elephant in the room—AI. For all the industries that we can think of today, AI has been a source of transformation—big or small, notwithstanding. The same is true for the animation industry too. “AI, for sure, has started to play a bigger role in what we do; today, we are relying on AI to drive different sorts of capabilities. For example, if we are working on creating a model of a tiger walking, AI has made that easier and faster for us. All you need to do is to feed the AI system with a bunch of behaviour and intelligence,” said Malhotra. He added that AI helps in cutting down on a lot of laborious work. We were tempted to ask him the very obvious question of whether AI would replace human animators. To this, he responded that while AI can help with redundant work, it would not be replacing human animators. “AI can help in bringing greater consumption in different places or in a different way; for example, people would now be able to create things that only a big film can afford. Suddenly, there will be more types of projects and other applications benefitting from the use of AI. I don’t see the overall demand for human animators going down. 
But, it would definitely help those with limited resources.” About the future Malhotra told us that at the end of every project, accolades notwithstanding, the team spares no time and quickly gets back to the drawing board and starts again for the next project. “Your years of experience gives you a lot of confidence in some perspective on things, but to be honest, every project we work on, you almost kind of go back, start over, and rechalk the plan and vision so that audiences this time around are not gonna get fatigued or bored by it,” he said. He proceeded to tell us that metaverse offers a major opportunity for this industry, and his company is uniquely positioned to bring that to the fore. “Metaverse will help create an alternate universe in which everything is going to be more of an experience—it could be taking the audience for ‘a trip to the moon’, or ‘discovering the underwater world’. I think it is going to be very powerful in terms of how we can start to impact the human experience more than just entertainment. And, I think that’s where I feel we have a great opportunity to go beyond what movies have done till now,” he said. Moving beyond the technical aspects, Malhotra feels passionate about the role that Indian cinema is set to play globally: “In a more philosophical sense, we want to continue to produce content from India and show it to the world.” In the recently released ‘Brahmastra’, Malhotra was also one of the producers. Beyond this, Namit Malhotra is keen on nurturing artists coming from underprivileged backgrounds. “These kids should be given opportunities to be part of this industry. Given the right resources and tools, I think these talented children would be able to undertake several artistic pursuits,” he concluded.
|
Metaverse will help create an alternate universe in which everything is going to be more of an experience
|
["AI Features"]
|
["Animation", "Interviews and Discussions", "VFX"]
|
Shraddha Goled
|
2022-10-17T16:00:00
|
2022
| 894
|
["Go", "programming_languages:R", "AI", "programming_languages:Go", "VFX", "analytics", "Animation", "GAN", "R", "Interviews and Discussions"]
|
["AI", "analytics", "R", "Go", "GAN", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/how-did-this-vfx-company-win-seven-oscars/
| 2
| 7
| 1
| false
| true
| false
|
50,230
|
What Does It Take To Be A Good Data Scientist? – By AIM & Simplilearn
|
Over the years, the term ‘data scientist’ has evolved greatly. From describing a person who handles data, to a professional who leverages machine learning — this definition has seen a great deal of change. Now, circa 2019, there are numerous blogs, Reddit pages and Quora threads dedicated to the discussion about “how to become a good data scientist”. The spectrum of data scientist roles and the myriad duties he/she has to perform is so broad that it is very difficult to capture under one single definition. Now, the data science sector is flourishing to such an extent that our earlier jobs study revealed more than 97,000 current job openings for analytics and data science in India. It is true that the “hottest job of the 21st century” has all the buzz, glam and traffic, but many enthusiasts are still confused as to what this job entails. Fewer still understand what it takes to be a data scientist. Is it about the skill set? Is it about the education? Is it about the company you work for? Or is it all of the above? About The Study This study was conducted in association with Simplilearn. The data for this study was collected by asking respondents to fill in a survey created by Analytics India Magazine about the popular beliefs around what it takes to be a good data scientist. This included various sub-topics such as upskilling, employment, skill set, and industry trends, among others. We took opinions from all those who practice data science — from professionals with less than two years of experience to CXOs — to get a thorough idea of the working environment in this growing field. Our survey was met with much enthusiasm — and we got some great insights from it. Some of them were expected, and many of them were real eye-openers. Data Science Skill Set In an interaction with Analytics India Magazine, Dr Krishnan Ramaswami, Head of Data Science at Tesco, listed out the ideal skills required in a successful data scientist: mathematical and statistical knowledge; good knowledge of machine learning algorithms; awareness of programming languages like Python and R, which are more tuned for data science; the ability to handle large datasets; domain knowledge; and problem-solving ability. Dr Ramaswami also emphasised hands-on experience. He said, “Participation in any machine learning competitions would be an added advantage as they serve as additional validation of their skills. Experience of developing any real-life problems either during their course work or exposures through projects is also beneficial.” In this study, Analytics India Magazine, in association with Simplilearn, is taking a deeper look at what steps data science professionals take to kick-start and further themselves in their careers. What Kind Of Work Does A Data Scientist Want To Work On? This question is of key importance to almost all data scientists because, as this is a nascent field, there are still some grey areas about the definitions. That is why the first question we asked during our exhaustive survey was about the kind of work the data scientists preferred to do. A large chunk of 36.9% of respondents said that they would ideally like to do more modelling. 34% of the respondents said that they would also like to do business interaction and draw business insights. However, 55.3% of the respondents said that they were interested in all of the aspects of data science — modelling, data visualisation, coding, database engineering and business interaction. 
How Will You Reach Your Goal Of Getting A Job In Data Science? As mentioned earlier, the demand for data science across industries is real. Over the past few years, the domain has been experiencing a rapid rise in jobs across the world, and India is one such country that is experiencing a data explosion. However, there is also a disparity between the number of jobs available and the number of applicants. Data science aspirants say that getting a job is difficult, while noted enterprises complain that the biggest problem they face is a talent crunch — especially in data science. But why is that? That’s why, when we asked about the best way to get ahead in their careers, we got very interesting responses. 41.7% of respondents said that they would like to update their skill sets, and 24.3% of the respondents said that networking with industry insiders was the way to move ahead in their careers. Upskilling Most companies and business leaders are posed with the question of how to build capabilities in an organisation so that they can ride the wave of digital transformation. Companies are desperately looking for people with skill sets that meet growing technological demands. When we asked our respondents about their preferred choice of upskilling, the results were interesting. Over 33% of our respondents said that they would prefer to register for an online course; 19% of the respondents said that they would like to take a break from their current work or study schedule and pursue data science as full-time education; and about 48% of the respondents said that they would like to study on their own, with the help of MOOCs and other free resources. Self Study In this competitive day and age, enterprises are not just looking for candidates trained in a single skill, but for individuals who know a cluster of skills that will stay relevant for more years. Some of the skills that are currently picking up are: automation, RPA, robotics, cybersecurity, artificial intelligence, IoT, connected devices, FinTech and blockchain. Our respondents had numerous options when it came to self-study. For instance, 67% of our respondents preferred to take free online courses in data science; over 62% preferred to watch YouTube videos and tutorials to learn more; and almost 44% relied on books. Networking With Industry Insiders Building social capital by engaging in social groups is a crucial aspect of strategising career success. Networking helps in making contacts that can be helpful in getting recommendations and, in turn, better job opportunities. A data scientist’s career also hinges on making meaningful contacts and creating lasting bonds with interesting and influential people. There are many avenues for the same, but our respondents had interesting insights regarding this: 68% of our respondents said that the professional social media portal LinkedIn was their tool of choice to start networking in this ever-growing profession; over 62% said that attending conferences with interesting subjects and like-minded people was immensely helpful in their careers; blogs and social media were endorsed by 55% of our respondents; and meetups were another helpful avenue for 54% of the participants. Showcasing Skills For a data scientist, showcasing his/her abilities in coding and other software capabilities is crucial. 
An important part of data science is staying relevant in the industry by showcasing your interest, expertise and unique opinion in as many relevant subjects as possible. Our participants had interesting insights regarding this. 63% of our respondents said that the best way to showcase skills would be through hackathons; 59% said that keeping their GitHub page updated and active was the way to go; and over 53% of the respondents said that blogging about their experiences and best practices was the key to updating the community and staying connected with it. How To Apply For Data Science Jobs As tools and techniques in data science evolve, jobs in this sector are maturing and gaining more and more prominence. The number of openings that companies have for data science roles is also at an all-time high. That is why it makes a huge difference which company you are applying to and which avenue you use to approach it. Almost 46% of our respondents said that they apply for job openings through job portals like Naukri, Monster and Shine, among others; 16% of the respondents said that they preferred connecting to companies via social media; and 15% of the participants said that they were also in touch with recruitment consultants for jobs. Participants’ Profile You can download the complete study here:
|
Over the years, the term ‘data scientist’ has evolved greatly. From describing a person who handles data, to a professional who leverages machine learning — this definition has seen a great deal of change. Now, circa 2019, there are numerous blogs, Reddit pages and Quora threads dedicated to the discussion about “how to become a […]
|
["AI Features"]
|
["AI What it Does", "blockchain tutorial", "Data science skills"]
|
Prajakta Hebbar
|
2019-11-19T13:31:55
|
2019
| 1,332
|
["data science", "Go", "artificial intelligence", "machine learning", "AI", "R", "Git", "RAG", "Python", "blockchain tutorial", "AI What it Does", "analytics", "Data science skills"]
|
["AI", "artificial intelligence", "machine learning", "data science", "analytics", "RAG", "Python", "R", "Go", "Git"]
|
https://analyticsindiamag.com/ai-features/what-does-it-take-to-be-a-good-data-scientist/
| 4
| 10
| 3
| false
| true
| true
|
10,166,253
|
Why Apollo.io Switched from GitHub Copilot to Cursor
|
Built on top of Microsoft’s Visual Studio Code, Cursor set out with a clear ambition: to go beyond existing AI coding tools like GitHub Copilot, which had already gained popularity since its official launch in 2022. California-based go-to-market (GTM) platform Apollo.io swiftly moved its engineering team from GitHub Copilot to Cursor, which seemed to have “gotten it right”. In an interview with AIM, Himanshu Gahlot, VP of engineering at Apollo.io, and Saravana Kumar, head of machine learning at Apollo.io, discussed the reason behind their engineering team’s switch from GitHub Copilot to Cursor. Apollo.io is a B2B sales platform powered by AI, designed to empower revenue teams with cutting-edge sales intelligence and engagement tools. “We started using GitHub Copilot early last year when it had just launched. We noticed it and quickly started using it,” Gahlot shared. “Slowly, we realised that there are better tools out there, or at least the ones we could use even more effectively within our company.” Is Cursor Really So Special? Explaining in simpler terms, Gahlot said that Cursor uses a newer approach in AI in which you can ask it to do several things at once, and it takes care of them in one go. The team ran a pilot program using Cursor with their engineers, and the response was overwhelmingly positive. “We got a 90% plus satisfaction rate. Almost every engineer said positive things about being able to understand the whole code base and generate the right things,” Gahlot added. But while the tool showed promise, Gahlot warned that there is a lot of hype around AI tools that don’t always match reality. “It did come with a caveat. You’d often hear people hyping these tools, claiming productivity gains of 25x or even 50x — but it’s very nuanced,” he said. Gahlot added that these tools work really well when starting from scratch—what he calls “0 to 1” use cases. They can be incredibly helpful when building something new or just putting together a prototype or demo. But things get tricky when dealing with large, complex code bases that have been developed over many years. “When it comes to a 10-year-old code base with millions of lines of code and like 30-40,000 individual files, then that is not how you would use it,” Gahlot said. In such cases, Gahlot said, teams need time to figure out the right way to use the tool, and engineers need proper training. Adding to this, Kumar pointed out that people tend to either overestimate or underestimate the capability of AI tools. “I would say that is not even an important aspect,” he said, referring to converting natural language into code. “The important aspect is to understand what it can and can’t do.” He explained that if somebody is able to clearly explain what they want, including all the details, assumptions, and context, turning that into working code is mostly handled by AI tools now. “What is actually not done [by them] is figuring out how we solve the problems. That’s where humans come in,” Kumar said. Reason Behind the Shift Gahlot said they moved from GitHub Copilot to Cursor because, though the former was doing pretty well in terms of auto-completion and small code generation, their engineers were not finding much success. “There wasn’t an “aha!” moment there. It wasn’t like, you know, you want to get something done, and it would just do it for you.” However, when the team tried out Windsurf and Cursor, they found their “aha!” moment. 
“It is like, you can chat with your code, write anything you want done, especially in a zero-to-one use case, and it just does it for you, rather than completing part of your code or suggesting a few things,” he explained. Apollo.io began adopting Cursor more widely and started seeing higher satisfaction among engineers using the IDE. However, as he pointed out, it’s not a one-size-fits-all solution. He explained that different roles within engineering teams have different needs. The same applies to machine learning engineers, back-end developers, and front-end teams. Other Big-Tech Collaborations Apollo.io has now onboarded three major AI providers, OpenAI, Anthropic, and Google, and it is constantly experimenting with their new models. Gahlot believes the future will see businesses relying on multiple AI models for different tasks. Gahlot also spoke about the company’s early collaboration with Anthropic. “We have been early partners with Anthropic on multiple things, specifically on the model context protocol (MCP) initiative that they recently launched,” he said. “We were one of the first companies they launched MCP with. I think initially there were about 10 startups, and we were one of them.” Currently, out of the 700 employees at Apollo.io, about 200 are spread across engineering, product, recruiting, sales, and support in India. Out of those, about 160 are in engineering and 40 in other departments, constituting 65% of its engineering team in India.
|
Cursor, an integrated development environment (IDE) designed to be “AI native,” has been making quite a noise since its launch in January 2023
|
["AI Features"]
|
["AI", "cursor", "GitHub"]
|
Shalini Mondal
|
2025-03-18T19:00:00
|
2025
| 820
|
["Anthropic", "cursor", "Go", "machine learning", "OpenAI", "AI", "Git", "Aim", "GAN", "GitHub", "R"]
|
["AI", "machine learning", "OpenAI", "Anthropic", "Aim", "R", "Go", "Git", "GitHub", "GAN"]
|
https://analyticsindiamag.com/ai-features/why-apollo-io-switched-from-github-copilot-to-cursor/
| 3
| 10
| 3
| false
| false
| false
|
10,047,885
|
Metaverse: The New Buzzword In The Tech World
|
“Even if the Metaverse falls short of the fantastical visions captured by science fiction authors, it is likely to produce trillions in value as a new computing platform or content medium. But in its full vision, the Metaverse becomes the gateway to most digital experiences, a key component of all physical ones, and the next great labour platform,” said Matthew Ball, venture capitalist, in his 2020 blog on the metaverse. In the world of technology, every once in a while, a concept or an idea erupts as the most disruptive development, taking the whole industry by storm. Companies, big or small, hop on this hype vehicle almost instantly, expecting good RoI; that said, whether or not these developments live up to the hype is a discussion for some other time. ‘Metaverse’ is one such concept that is currently having its moment in the sun. Derived from the sci-fi literature of the 90s, the metaverse in current times has been popularised and advocated heavily by Facebook founder Mark Zuckerberg. Recently, motor vehicle company BMW introduced its ‘globally unique virtual world: Joytopia’. This new streaming platform from BMW will allow users to independently navigate through three unique worlds as an avatar, along with a map and sign for help. These avatar forms will have the shape and form chosen by the users, and they can run, jump, or fly. The metaverse is the driver behind Facebook’s Oculus VR and its newly announced Horizon virtual world. Further, as expenditure on cloud computing increases, such technologies will drive the online-offline future. What is the Metaverse The metaverse’s earliest mentions came in dystopian sci-fi novels, where virtual universes provide an escape from failing societies. This very concept has now evolved into a moonshot goal for Silicon Valley and a buzzword in tech circles. The idea is to create a similar space on the internet where users’ digital avatars can walk and interact with one another in real-time. The metaverse can be thought of as a collection of many worlds. Online social games like Fortnite and user-created virtual worlds like Minecraft reflect some of the ideas of the metaverse. But the metaverse is much bigger than these in that it is not tied to any one app or a single place, as explained by Rev Lebardian, VP of simulation technology, NVIDIA. Speaking of NVIDIA, the company has gone all out in this new field. Last year, the company launched a platform called Omniverse. CEO Jensen Huang said the inspiration for this platform came from Neal Stephenson’s 1992 sci-fi novel ‘Snow Crash’. In a separate interview, Huang referred to Omniverse as a ‘metaverse for engineers’. Last month, the same platform was used to create a virtual replica of Huang, who delivered a part of the keynote speech at the NVIDIA GTC Conference. Nvidia Reveals Its CEO Was Computer Generated in Keynote Speech https://t.co/kWE8h5APpz — hardmaru (@hardmaru) August 13, 2021 Facebook CEO Mark Zuckerberg has been quite vocal about his excitement for tech’s latest buzzword. He has said that Facebook’s future is a metaverse. Facebook’s metaverse will go beyond just gaming and include the workplace, entertainment and others to create a ‘social experience’ for its users. The company has also invested in virtual reality, with almost 20 per cent of its employees working exclusively on VR and AR and recent acquisitions like BigBox VR and Unit 2 Games. The VR segment accounts for 3 per cent or less of Facebook’s top line. 
Companies like Intel and Unity Software have spoken about Metaverse. Microsoft CEO Satya Nadella also discussed the concept of ‘enterprise metaverse’ in July during the company’s earnings release. In South Korea, a metaverse alliance is working to persuade companies and the government to collaborate and develop an open national VR platform. Such a platform would aim to blend smartphones, 5G networks, augmented reality, and social media to solve societal problems. Wrapping up Founders, investors, tech executives, and futurists have all tried to claim a stake in the Metaverse, building on its potential for entertainment, experimentation, social connection, and ultimately profit. Ball says that Metaverse is not a virtual world or a space but can be seen as a successor to the mobile internet, a framework for a highly connected life. He also believes that there is no clear distinction between before and after Metaverse.
|
Derived from the sci-fi literature of the 90s, Metaverse in current times has been popularised and advocated heavily by Facebook founder Mark Zuckerberg.
|
["Global Tech"]
|
["Facebook", "Metaverse"]
|
Shraddha Goled
|
2021-09-07T18:00:00
|
2021
| 716
|
["Go", "API", "programming_languages:R", "cloud computing", "AI", "Metaverse", "venture capital", "Git", "Aim", "Facebook", "llm_models:Bard", "R"]
|
["AI", "Aim", "cloud computing", "R", "Go", "Git", "API", "venture capital", "llm_models:Bard", "programming_languages:R"]
|
https://analyticsindiamag.com/global-tech/metaverse-the-new-buzzword-in-the-tech-world/
| 2
| 10
| 2
| false
| false
| false
|
49,846
|
Researchers Use Artificial Intelligence To Design HIV Treatment Plan
|
Image Source: Pixabay. In what can be termed one of the most productive combinations of technology and medicine, new research has harnessed the power of artificial intelligence to determine accurate dosages for treating patients with HIV. One of the problems in treating an HIV patient is that, traditionally, he/she is administered the same antiretroviral therapy (ART) regimen for life, even if his/her viral load has been reduced by several orders of magnitude from the initial level. Some HIV drugs are associated with serious side effects, which can be so bad that patients refuse to take the recommended doses, increasing the risk of disease progression. Now, with the help of AI, researchers have discovered that drug and dose inputs can be related to viral load reduction through a Parabolic Response Surface (PRS). This new method can rationally guide a clinically‐actionable approach to identify optimised population‐wide and personalised dosing. By using the new AI-powered method, researchers saw a 33% reduction in the long‐term TDF maintenance dose (200 mg) compared to standard regimens (300 mg). This regimen keeps the HIV viral load below 40 copies/mL with no relapse during a 144‐week observation period. “This study demonstrates that AI‐PRS can potentially serve as a scalable approach to optimize and sustain the long‐term management of HIV as well as a broad spectrum of other indications,” said the researchers. Reportedly, 10 patients took the treatment recommended by the AI for 144 weeks. No significant side effects were reported, and all 10 successfully completed the treatment course.
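To make the parabolic response surface idea more concrete, below is a minimal Python sketch of fitting a quadratic dose-response curve and reading off the smallest dose that still meets a target viral-load reduction. The doses, responses, target value and variable names are illustrative assumptions for this article, not data or code from the study.

```python
# Illustrative sketch only: fit a parabolic (quadratic) response curve
# relating drug dose to viral-load reduction, in the spirit of the
# AI-PRS idea described above. All numbers below are made up.
import numpy as np

# Hypothetical observations: dose (mg) vs. log10 reduction in viral load
doses = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
log_reduction = np.array([1.2, 2.1, 2.6, 2.7, 2.8])

# Fit response = a*dose^2 + b*dose + c (a parabola) by least squares
a, b, c = np.polyfit(doses, log_reduction, deg=2)

# Query the fitted parabola for the smallest dose that still achieves a
# target reduction, i.e. keeps the viral load suppressed.
candidate_doses = np.linspace(doses.min(), doses.max(), 500)
predicted = a * candidate_doses**2 + b * candidate_doses + c
target = 2.5  # hypothetical target log10 reduction
viable = candidate_doses[predicted >= target]

if viable.size:
    print(f"Smallest dose meeting the target (illustrative): {viable.min():.0f} mg")
else:
    print("No dose meets the target in this toy example")
```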
|
Image Source : pixabay In what can be termed as one of the most productive combinations of technology and medicine, a new research has now harnessed the power of artificial intelligence to find out the accurate dosage for treating patients with HIV. One of the problems while treating an HIV patient is that traditionally he/she […]
|
["AI News"]
|
[]
|
Prajakta Hebbar
|
2019-11-13T16:33:27
|
2019
| 256
|
["artificial intelligence", "programming_languages:R", "AI", "ML", "Scala", "programming_languages:Scala", "R"]
|
["AI", "artificial intelligence", "ML", "R", "Scala", "programming_languages:R", "programming_languages:Scala"]
|
https://analyticsindiamag.com/ai-news-updates/researchers-use-artificial-intelligence-to-design-hiv-treatment-plan/
| 2
| 7
| 0
| false
| true
| true
|
10,134,901
|
G42 Launches Advanced Hindi Language Model NANDA at UAE-India Business Forum
|
G42, the UAE-based AI company, launched NANDA, a new Hindi Large Language Model (LLM), at the UAE-India Business Forum in Mumbai on September 10, 2024. The announcement was made in the presence of His Highness Sheikh Khaled bin Mohammed bin Zayed Al Nahyan, Crown Prince of Abu Dhabi, and was also attended by Hon. Minister of Commerce Mr. Piyush Goyal. NANDA is a 13-billion parameter model trained on approximately 2.13 trillion tokens of language datasets, including Hindi. The LLM is the result of a collaboration between Inception (a G42 company), Mohamed bin Zayed University of Artificial Intelligence, and Cerebras Systems. The model was trained on Condor Galaxy, one of the world’s most powerful AI supercomputers for training and inferencing, built by G42 and Cerebras. “India has solidified its position as a global technology leader, driven by transformative initiatives like Digital India and Startup India under Prime Minister Narendra Modi’s leadership. As the country stands on the brink of AI-powered growth, G42 is proud to contribute to this journey with the launch of NANDA in support of India’s AI ambitions,” said Manu Jain, CEO – G42 India. This launch follows G42’s successful introduction of JAIS, the world’s first open-source Arabic LLM, in August 2023. With NANDA, G42 aims to replicate this success and empower India’s scientific, academic, and developer communities by accelerating the growth of a vibrant Hindi language AI ecosystem. “G42 has a strong track record in the development of language and domain-specific LLMs. With NANDA, we are heralding a new era of AI inclusivity, ensuring that the rich heritage and depth of Hindi language is represented in the digital and AI landscape. NANDA exemplifies G42’s unwavering commitment to excellence and fostering equitable AI,” said Dr. Andrew Jackson, Acting CEO of Inception, a G42 company. G42 has partnered with global tech leaders and recently received a $1.5 billion investment from Microsoft. It has also collaborated with OpenAI, the company behind ChatGPT. The company has been involved in various international projects, including a $1 billion digital ecosystem initiative for Kenya in partnership with Microsoft. Earlier this year, M42, the healthcare subsidiary of the Abu Dhabi-based AI conglomerate G42, unveiled two new open-access versions of its Med42 LLM.
|
The model was trained on Condor Galaxy, one of the world’s most powerful AI supercomputers for training and inferencing, built by G42 and Cerebras.
|
["AI News"]
|
["G42"]
|
Siddharth Jindal
|
2024-09-10T21:06:08
|
2024
| 364
|
["Go", "ChatGPT", "artificial intelligence", "OpenAI", "AI", "G42", "Git", "GPT", "Aim", "ViT", "R"]
|
["AI", "artificial intelligence", "ChatGPT", "OpenAI", "Aim", "R", "Go", "Git", "GPT", "ViT"]
|
https://analyticsindiamag.com/ai-news-updates/g42-launches-advanced-hindi-language-model-nanda-at-uae-india-business-forum/
| 2
| 10
| 2
| false
| false
| false
|
10,167,851
|
85% of ServiceNow India Resources are in R&D & Engineering, says President Paul Smith
|
Agentic AI is no longer just a buzzword—it’s a tangible force reshaping how enterprises approach productivity. According to Paul Smith, president of global customer and field operations at ServiceNow, the rise of agentic AI is generating not just curiosity or anticipation, but real, measurable impact across organisations. “There is hype, and yes, there’s FOMO (fear of missing out),” Smith admitted while speaking with AIM. Moreover, he noted that customers deploying agentic AI are seeing phenomenal results—reportedly achieving efficiency gains of 10%, 20%, 30%, even up to 40%. Smith took BT, the telecom giant partially owned by Airtel, as an example, and explained that by leveraging ServiceNow’s Now Assist platform, the company has managed to resolve customer service calls 55% faster. Internally, ServiceNow has gone all in on AI too. The company currently runs over 200 agentic use cases in production, generating $325 million in annual savings. A standout example comes from ServiceNow’s internal finance help desk. “Before AI, the average turnaround was four days,” Smith said. “With ServiceNow agents, it’s now down to eight seconds for most queries.” This massive time saving has allowed ServiceNow to redeploy its finance staff to more strategic roles, without resorting to job cuts. ServiceNow has four of the top five banks in India as customers. “I am working really hard to be able to say it is five of the top five,” Smith said. How is India Contributing to the Global Agentic AI Push for ServiceNow “One in five of all ServiceNow employees is based in India,” said Smith. “And 85% of the resources that we have in ServiceNow India are in R&D and engineering roles—especially really core engineering.” He went on to highlight that a significant number of these engineers are working on leading-edge research around AI, underlining India’s role in building the future of enterprise automation. This isn’t a recent shift either. Over the last three years, ServiceNow’s India engineering team has grown at an average annual rate of 25%, cementing its place as a crucial node in the company’s global innovation network. It’s the engineering talent in India that’s quietly laying the groundwork for much of that innovation. The ServiceNow Yokohama release, unveiled just weeks ago, is packed with AI-first innovations. These include a suite of pre-built agents, AI Agent Studio for custom development, and the Agent Control Tower. ServiceNow’s biannual platform releases, named after cities—from Aspen to Xanadu to Yokohama, and Zurich on the horizon—symbolise the company’s steady innovation. The speed of innovation is also accelerating. “We’re now doing monthly releases,” Smith revealed. “So there will be releases before Zurich, and will bring even more announcements.” Smith’s conversations with Sumeet Mathur, who leads the India business for ServiceNow, reinforce this trajectory. Smith said that there’s an enormous appetite among Indian firms to partner with ServiceNow, not just because of the tech, but because of the one platform, one architecture, and one data model. One Platform, One Data Model, One Architecture A central pillar of ServiceNow’s strategy and a key reason Smith joined the company is its single-platform architecture. “This was Fred Luddy’s (ServiceNow founder) vision 20 years ago,” Smith said. “You start by managing your tech assets, then move to employee experiences, and finally to customer outcomes.” The unified architecture is not just a differentiator but a driver of massive efficiency. 
A blog from ServiceNow claims that this approach has saved over three million hours across employees and customers. Smith provided several examples to back this up. At Visa, the platform is being used to handle credit card dispute resolutions 30% faster. Siemens started with IT and HR workflows and is now applying ServiceNow to customer service and field operations. “This is the beauty of it being one platform,” he said. Smith said that the Agentic Control Tower is generating the most excitement among clients, especially in India. “Whether you’re LTIMindtree or one of the top banks in India, you’re going to have agents from us, from Infosys, TCS, or even built in-house. You need a way to control them—understand what each one can access, what decisions it can make. That’s the Control Tower.” He emphasised that ServiceNow, which has long been seen as a “control tower” for technology operations, is now evolving to be the control centre for AI deployments across the enterprise. Citing a UK-headquartered pharmaceutical company that ServiceNow is working with as an example, Smith said they have implemented AI across the business. This has led to a projected 20% boost in overall performance. The firm is using AI to automate routine back-office and shared service tasks, which allows them to reallocate staff to more critical areas, like frontline drug discovery. “If that reallocation helps them bring a drug to market three months faster, that’s not just a win for the company—it’s a win for patients and healthcare globally,” Smith said. Smith also pointed to a broader ambition—ServiceNow’s entrance into the customer relationship management (CRM) space. “Thirty years on from the launch of traditional CRM vendors, most companies still haven’t truly transformed their CRM,” he said. ServiceNow is working with one of the world’s largest automakers to solve this, including handling warranty issues from the dealer to the original equipment manufacturer (OEM) to the supply chain. On the acquisition front, Smith shared insights into ServiceNow’s recent agreement to acquire Moveworks, an agentic AI-first company with a few hundred customers. “We have 8,400 customers. When our customers tell us they want this to come together, we listen,” Smith said. “Moveworks has re-platformed to be agentic-first. It brings incredible technology and a great team.” While the deal is still undergoing regulatory approvals, Smith expects it to close by summer and sees the merger as a logical and powerful combination.
|
The company currently runs over 200 agentic use cases in production, generating $325 million in annual savings.
|
["IT Services"]
|
[]
|
Mohit Pandey
|
2025-04-13T10:00:00
|
2025
| 949
|
["Go", "API", "agentic AI", "AI", "ETL", "RAG", "Aim", "ViT", "GAN", "R"]
|
["AI", "agentic AI", "Aim", "RAG", "R", "Go", "API", "ETL", "GAN", "ViT"]
|
https://analyticsindiamag.com/it-services/85-of-servicenow-india-resources-are-in-rd-engineering-says-president-paul-smith/
| 4
| 10
| 4
| false
| true
| true
|
10,084,546
|
Chips Fuel F1 Cars’ Record Performances
|
Today, semiconductor chips power everything from switching circuits to allowing electronic devices to function and respond to user commands with precision. As such technologies continue to evolve, so do their use-cases across industries. However, perhaps the least known among the use-cases of semiconductor chips is the case of F1 cars. From its design to on-track performance, a grand prix car needs time-intensive research and development to stay competitive year on year. Amidst several components that contribute to the success of this car, semiconductor chips play a critical role in sensors, telemetry units and electronic control units (ECUs). Tiny But Mighty F1 is perhaps the most data-driven sport in the world at present. The data in these cars is generated from a variety of sources but primarily from the sensors. More than 250 sensors are typically fitted onto the car during a race weekend, which are further divided into three main categories: control, instrumentation and monitoring. Working in symbiosis, these sensors deliver pressure, temperature, inertial and displacement data, along with data on measurements of physical quantities (i.e., temperature, pressure, torque, speed) and the operation of the system (i.e., the internal state of the car, such as the gearbox). Physically connected through an analog system to the ECU that runs the whole car or through a network of buses, called CAN buses, that bring information back to the ECU, these sensors are embedded into all the systems of the car. The sizes of the sensors vary according to their function and type. For instance, there is an FIA-mandated TPMS system, which measures tyre pressures and is installed inside the wheels. In addition, the car also carries small, thermal-imaging sensors mounted on the wings and floors that measure the surface temperature and the degradation of the front and rear tyres. It is important to note that drivers are also treated as points of data when it comes to the performance of an F1 car and wear 3mm sensors sewn into the palm of their glove fabric to monitor and record their vital signs during the race. Data from every such sensor then helps enhance car performance, formulate race-winning pit stop and tyre strategies, and optimise overall on-track drivability for the pilot. Scaling Mountains of Data Over the duration of two days, a single car produces a terabyte or more of data, which includes ancillary information such as video/media output. However, the live data generated by the car while it’s running amounts to nearly 30 megabytes per lap, and two to three times more is collected once the car is in the pits/garage and the team offloads the remaining data. On an average race weekend, nearly 11 terabytes of data is generated. Everything is then synchronised to provide real-time insights into what’s happening at a precise time on each one of the sensors. This data is then encrypted and sent back to the team at the factory through the common telemetry systems for all F1 teams for further analysis. It is interesting to note that the telemetry system, known as ATLAS (advanced telemetry linked acquisition system) developed by McLaren Electronic Systems (MES), is common to all F1 teams. This implies that—though the data remains encrypted—communication between drivers and their on-track teams can be accessed by rival teams in real-time. In the past, drivers have used this system to bluff about their tyre health during the race to throw off rival teams.
Mercedes driver and seven-time F1 world champion Lewis Hamilton is especially famous for faking tyre problems on track only to put in fastest laps. However, he believes that it is a “very fine line”. From Sensors to On-Track Performance As the car evolves through the calendar, so do the sensing requirements, and to such an extent that existing technologies often do not suffice. The electronics department for F1 cars therefore develops bespoke sensors and data acquisition systems in-house to provide valuable information that can then be deployed to improve car performance. These updates in the car setup also have a direct impact on the team’s success. Aside from these self-developed sensors, F1 teams also have strategic partnerships with semiconductor companies such as Qualcomm–Ferrari, AMD–Mercedes, Cadence–McLaren among others to enable experimentation with new solutions on track and help develop a car that stresses the system to its limit. Gamechangers It is not only what the sensors communicate but the rate at which they communicate this information that makes a difference. The data rate depends on the type and category of sensor and can range anywhere from 1 Hz to 1 kHz. These rates can also be increased significantly if, for instance, the teams are interested in collecting vibration data, which can then be sampled at up to 200 kHz to detect the g-forces on drivers. If one were to compare this gathered data with the everyday usage of devices, the amount of video information and data that the teams get out of the car might not seem like much. However, what matters is that every bit of information in the data stream represents important aspects of car performance, which are then closely monitored by the teams back at the factories. For example, Mercedes faced significant aerodynamic issues at the beginning of the 2022 season, which they then overcame with the help of the data gathered from the sensors on their cars. To understand what an F1 car does and how the teams manage their data, the best parallel could be the mission control for a spacecraft. The modern F1 car is evidently simpler than a spacecraft system, but several similar principles apply to both machines. Much like the spacecraft, F1 teams monitor a range of complex systems along with the humans who operate them. It is noteworthy that the data link from an F1 car is similar to the data link from a spacecraft in terms of bandwidth. Anecdotally, the amount of data collected by a modern F1 car over the course of a single race is more than what was collected across the Apollo space programme.
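As a rough, back-of-the-envelope illustration of how the sensor counts and sampling rates discussed above translate into the per-lap data volumes quoted earlier, here is a short Python sketch. The sensor count, payload size, blended sampling rate and lap time are assumptions chosen only to show the arithmetic; real figures vary by team and car.

```python
# Rough, illustrative estimate of live telemetry volume per lap.
# All figures are assumptions chosen only to show the arithmetic;
# real sensor counts, rates and payload sizes vary by team and car.

SENSORS = 250                 # article: 250+ sensors on a race weekend
BYTES_PER_SAMPLE = 8          # assumed payload per sample (timestamp + value)
AVG_SAMPLE_RATE_HZ = 200      # assumed blended rate between 1 Hz and 1 kHz
LAP_TIME_S = 90               # assumed lap time in seconds

bytes_per_lap = SENSORS * BYTES_PER_SAMPLE * AVG_SAMPLE_RATE_HZ * LAP_TIME_S
megabytes_per_lap = bytes_per_lap / 1_000_000

print(f"Estimated live data per lap: {megabytes_per_lap:.1f} MB")
# With these assumptions the result lands in the tens of megabytes per lap,
# the same order of magnitude as the ~30 MB per lap quoted in the article.
```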
|
Amidst several components that contribute to the success of F1 cars, semiconductor chips play a critical role in sensors & telemetry units
|
["IT Services"]
|
["AMD", "Qualcomm", "semiconductor chips", "Semiconductor India", "Sensors"]
|
Akanksha Sharma
|
2023-01-10T12:00:00
|
2023
| 1,006
|
["Go", "AMD", "TPU", "programming_languages:R", "AI", "data-driven", "llm_models:PaLM", "programming_languages:Go", "semiconductor chips", "RAG", "Semiconductor India", "Qualcomm", "ViT", "Sensors", "R"]
|
["AI", "RAG", "TPU", "R", "Go", "ViT", "data-driven", "llm_models:PaLM", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/it-services/chips-fuel-f1-cars-record-performances/
| 2
| 10
| 0
| true
| true
| true
|
56,485
|
Why You Need A Mentor To Shape Your Data Science Career
|
Data scientists today struggle to devise a long-term career roadmap to thrive in the data science landscape. Often, aspirants plan their next move in data science by reading online articles, talking to their friends, or by looking at other professionals. While such strategies may deliver excellent results on some occasions, they can also lead to a lot of failures. Consequently, data scientists fail to make the most out of the opportunities in the domain. To hit fewer roadblocks in their career, one should seek help from mentors. However, today, data scientists focus on sharpening their skills more than looking for a good mentor. Undoubtedly, since the data science domain only picked up in the last few years, it wasn’t straightforward to find mentors. But the scenario has changed now. Today, one can use various platforms to engage with the data science community and quickly get mentoring. However, data scientists only use such platforms to solve their technical problems and not career-related challenges. But in 2020, developers should start looking for mentors to make the right decisions while moving ahead in their career. To help you understand the importance of mentors, we have laid down how you can gain an advantage in the ever-changing data science market by obtaining proper guidance. Mentor Shows You The Right Direction Mentors show you the right direction by syncing market requirements with your skills. Data scientists easily get perplexed in the vast data science domain, failing to make decisions to progress. This is where mentors can play a significant role in your career. Mentors can assist you in creating a plan that can help you approach your goal within a set time frame. The adage ‘A job well planned is half done’ fits well in this situation. One needs a solid plan to be confident of the path and execute it enthusiastically. A weak plan will create confusion, thereby hindering performance. Consequently, having a mentor will enable you to take advantage of someone who has witnessed the evolution of the landscape. Assess Your Progress Since the learning path is endless, it is essential to know when to decrease the pace of learning new things and indulge in mastering specific techniques. Continuously learning new technologies will make one a jack of all trades and master of none. Thus, occasionally one needs to assess the progress, and if required, the plan needs to be altered. Mentors gain experience over the years, which enables them to analyse your progress and suggest effective strategies accordingly. Mentor Is A Motivational Force Data scientists usually burn themselves out while trying to master the skills to gain a competitive advantage. Besides, setbacks in the journey of data science can lower the self-esteem of developers. In such situations, mentors’ help is of utmost importance to keep going. To find a way out of the problems and continue without losing enthusiasm, one needs the support of someone who has learned to overcome such challenges. “One can also progress without a mentor, but it slackens the advancement,” said Bastin Robin, a chief data scientist at CleverInsight. “I was fortunate enough to have Anand, the CEO of Gramener, as my mentor. He has been the go-to man for guidance.” Connects You To Other Professionals Having a mentor has other perks, too; you can take advantage of their connections and get help from other prominent professionals.
Mentors can bring two or more people together for mutual aid, which helps in obtaining different perspectives and encourages brainstorming. Expanding connections in data science is vital for learning numerous techniques from various experts. Outlook In 2020, data scientists should focus on getting mentored by prominent data scientists. One can use platforms like LinkedIn and Twitter, among others, to find a mentor by engaging with different data science-related posts. Moreover, putting your work on these platforms will also assist you in catching the eye of other professionals, increasing the chances of finding a potential mentor. Therefore, along with data science domain knowledge, one should have a mentor to thrive in their career.
|
Data scientists today struggle to devise a long term career roadmap to thrive in the data science landscape. Often aspirants plan their next move in data science by reading online articles, talking to their friends, or by looking at other professionals. While such strategies may deliver excellent results on some occasions, it can lead to […]
|
["AI Trends"]
|
["data science mentor"]
|
Rohit Yadav
|
2020-02-11T19:30:00
|
2020
| 674
|
["Go", "data science", "programming_languages:R", "AI", "data science mentor", "programming_languages:Go", "ViT", "R"]
|
["AI", "data science", "R", "Go", "ViT", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-trends/why-you-need-a-mentor-to-shape-your-data-science-career/
| 2
| 7
| 1
| false
| false
| false
|
10,077,069
|
Another So Called ‘Fast’ JS Framework, But Is It Better Than Next.js
|
There is a race going on in the tech world—who renders web apps faster. React, React Native, Gatsby, Next.js, and Astro are all jostling to outrun each other to help developers build a faster-loading web app. The latest is ‘Remix’, a React-based framework for developing web applications, which is also trying to catch up in the race. Remix allows developers to render code on the server, which tends to result in better performance and search optimization as opposed to using React only on the client side. It may seem as though the problem has already been solved by Next.js; however, Remix has been trying to do it in a different way. It only does server-side rendering (SSR). In addition, it doesn’t do static site generation or incremental static regeneration like Next.js does. That’s somewhat interesting considering Jamstack applications that use static generation have been quite popular in recent years and have also delivered better performance. They are fast and easy to deploy, but the biggest challenge with them is that developers have to rebuild those pages whenever the data changes. Remix, by contrast, goes all in on server-side rendering, which means developers will need an actual server to run the application. Why Remix is solving the problem via SSR In server-side rendering, the server’s response to the browser is the HTML of the page that is ready to be rendered. It means that the browser will start rendering the HTML from the server without having to wait for all the JavaScript to be downloaded and executed. Then, React will need to be downloaded and go through the process of building a virtual DOM and attaching events to make the page interactive, while the user can already start viewing the page as all of that is happening. According to experts, a server-side rendered (SSR) application makes it possible for pages to load more quickly, thereby enhancing the user experience. Since content may be rendered server-side before the page loads, this is excellent for SEO because search engines can simply crawl and index the content. Search engines give priority to web pages with faster load speeds, so such pages are correctly indexed. Besides, for users with sluggish internet connections or obsolete equipment, server-side rendering makes it easier for web pages to load quickly. (Server Side Rendering) While there are certain advantages with this approach, there are disadvantages too. The risks involved As server-side rendering is not the norm for JavaScript websites, the server bears the full cost of rendering content for users and bots, which can be both expensive and resource-intensive. Although it is effective to render static HTML on the server, doing so for larger, more complicated apps can cause load times to increase. Besides, third-party JavaScript code might not be compatible with server-side rendering. Even though rendering on the server is appropriate for static site generation, more complicated apps may have an overall slower page rendering due to frequent server calls and entire page reloads. Is Remix better than Next.js? Since websites often consist of numerous pages that display either static or dynamic content, ‘routing’—or, the process of moving between different pages within a website—is a crucial function. File-based routing, where the user generates a file and it is immediately accessible via the browser, is supported by both Remix and Next.js. For instance, after bootstrapping a new project in Remix, you can add a new file to the routes folder.
Additionally, both frameworks feature client-side route navigation—which enables users to access pages without having to reload their browsers—along with dynamic routes. Remix uses nested routes, which makes it unique. Remix enables the user to design a hierarchy of routes, where each route is a separate file that can choose where its children should be shown, implying that the user can have several active routes on a single page; Next, by comparison, only supports nested routes from a file perspective. Remix offers most of the functionality offered by Next, but it makes an additional effort to steer clear of React by offering a higher level of abstraction. Conversely, Next does not try to hide the fact that it is completely dependent on React. Next may be the preferable option for React experts as it deals with more well-known ideas, but for new developers who are less experienced with React, it may be far simpler to begin by utilising Remix directly. The age of each framework is another inevitable reality. Remix is a new entrant into the game, while Next has been around for nearly half a decade. The age factor has allowed Next to add a lot of performance optimisations, such as inline-font optimization, image optimization and more, which Remix just hasn’t had the time to add yet.
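Since the article discusses server-side rendering at the conceptual level rather than through framework code, here is a minimal, framework-agnostic sketch of the core idea: the server responds with fully formed HTML instead of an empty shell plus JavaScript. It is written in Python purely for brevity and is not Remix or Next.js code; the route, data source and markup are invented for illustration.

```python
# Framework-agnostic illustration of server-side rendering (SSR):
# the server builds the full HTML for a page before responding, so the
# browser can paint content without waiting for client-side JavaScript.
# This is NOT Remix or Next.js code; names and data are invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

FAKE_DB = {"/": ["Remix", "Next.js", "Astro"]}  # pretend data source

def render_page(path: str) -> str:
    items = "".join(f"<li>{name}</li>" for name in FAKE_DB.get(path, []))
    # The HTML is fully formed on the server, ready to be displayed as-is.
    return f"<html><body><h1>Frameworks</h1><ul>{items}</ul></body></html>"

class SSRHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        html = render_page(self.path).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(html)))
        self.end_headers()
        self.wfile.write(html)

if __name__ == "__main__":
    # Each request is rendered on the server at request time, which is the
    # trade-off the article describes: fresher content, but more server work.
    HTTPServer(("127.0.0.1", 8000), SSRHandler).serve_forever()
```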
|
Remix allows developers to render code on the server which tends to result in better performance and search optimization.
|
["AI Trends"]
|
["Astro"]
|
Tausif Alam
|
2022-10-12T12:00:00
|
2022
| 791
|
["Go", "programming_languages:R", "AI", "ML", "programming_languages:Java", "Astro", "ViT", "JavaScript", "R", "Java", "programming_languages:JavaScript"]
|
["AI", "ML", "R", "JavaScript", "Go", "Java", "ViT", "programming_languages:R", "programming_languages:JavaScript", "programming_languages:Java"]
|
https://analyticsindiamag.com/ai-trends/another-so-called-fast-js-framework-but-is-it-better-than-next-js/
| 3
| 10
| 0
| true
| false
| false
|
66,650
|
How AI Surpassed Humans In Playing Flappy Bird Game
|
Reinforcement learning has exceeded human-level performance when it comes to playing games. Games serve as rich and challenging testbed domains for reinforcement learning algorithms, starting with a collection of games and well-known reinforcement learning implementations. Reinforcement learning is beneficial when we need an agent to perform a specific task but, to be precise, there is no single “correct” method of accomplishing it. In a paper, researcher Kevin Chen showed that deep reinforcement learning is very efficient at learning how to operate the game Flappy Bird, despite the high-dimensional sensory input. According to the researcher, the goal of this project is to learn a policy that enables an agent to successfully play the game. Flappy Bird is a popular mobile game in which a player tries to keep the bird alive for as long as possible while the bird flaps and navigates through the pipes. The bird automatically falls towards the ground due to gravity, and if it hits the ground, it dies, and the game ends. In order to score high, the player must keep the bird alive for as long as possible while navigating through obstacles — pipes. Also, training an agent to successfully play the game is especially challenging because the agent is afforded only pixel information and the score. AI Playing Flappy Bird The researcher did not provide any information about what the bird or pipes look like to the agent; the agent must learn these representations and directly use the input and score to develop an optimal strategy. The goal of reinforcement learning is always to maximise the expected value of the total payoff, or the expected return. In this research, the agent used a Convolutional Neural Network (CNN) to evaluate the Q-function for a variant of Q-learning. The approach utilised here is deep Q-learning, in which a neural network is used to approximate the Q-function. As mentioned, this neural network is a convolutional neural network, which can also be called a Deep Q-Network (DQN). According to the researcher, an issue that arises in traditional Q-learning is that the experiences from consecutive frames of the same episode (a run from start to finish of a single game) are highly correlated. This, in turn, hinders the training process and leads to inefficient training. To mitigate this issue and de-correlate the experiences, the researcher used the experience replay method, storing the experience from every frame in a replay memory. Behind Deep Q-Network The Q-function in this approach is approximated by a convolutional neural network, where the network takes as input an 84×84×historyLength image and has a single output for every possible action. The first layer is a convolution layer with 32 filters of size 8×8 with stride 4, followed by a rectified nonlinearity. The second layer is also a convolution layer, of 64 filters of size 4×4 with stride 2, followed by another rectified linear unit. The third convolution layer has 64 filters of size 3×3 with stride 1, followed by a rectified linear unit. These layers are followed by a fully connected layer with 512 outputs, along with an output layer that is also fully connected, with a single output for each action. Wrapping Up The metric for evaluating the performance of the DQN is the game score, i.e., the number of pipes passed by the bird.
According to the researcher, the trained Deep Q-Network played extremely well and even performed better than humans. The scores for the human and the DQN are both reported as infinite for the easy and medium difficulties, while the DQN still has the edge over a human player because it does not have to take a break and can play for 10+ hours at a stretch. Read the paper here.
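For readers who want to see how the architecture described above translates into code, the following is an illustrative PyTorch sketch of the convolutional Deep Q-Network: 32 filters of 8×8 with stride 4, 64 filters of 4×4 with stride 2, 64 filters of 3×3 with stride 1, a 512-unit fully connected layer, and one output per action. It is a reconstruction for illustration, not the researcher's original code; the history length and the number of actions are assumptions.

```python
# Illustrative PyTorch reconstruction of the DQN architecture described
# above; not the original implementation from the paper.
import torch
import torch.nn as nn

class FlappyDQN(nn.Module):
    def __init__(self, history_length: int = 4, num_actions: int = 2):
        # Input: 84 x 84 x history_length stack of preprocessed frames.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(history_length, 32, kernel_size=8, stride=4),  # layer 1
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),              # layer 2
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),              # layer 3
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),   # fully connected layer with 512 outputs
            nn.ReLU(),
            nn.Linear(512, num_actions),  # one Q-value per possible action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Usage: q_values = FlappyDQN()(torch.zeros(1, 4, 84, 84))
# Training would pair this network with experience replay, as the article
# notes: transitions are stored in a replay buffer and sampled at random to
# break the correlation between consecutive frames.
```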
|
Reinforcement learning has exceeded human-level performance when it comes to playing games. Games as a testbed have rich and challenging domains for testing reinforcement learning algorithms that start with a collection of games and well-known reinforcement learning implementations. Reinforcement learning is beneficial when we need an agent to perform a specific task, but to be […]
|
["Deep Tech"]
|
["Reinforcement Learning", "reinforcement learning models", "Reinforcement Learning Systems"]
|
Ambika Choudhury
|
2020-06-04T12:00:00
|
2020
| 636
|
["Reinforcement Learning Systems", "Go", "TPU", "Reinforcement Learning", "programming_languages:R", "AI", "neural network", "programming_languages:Go", "reinforcement learning models", "ViT", "CNN", "R"]
|
["AI", "neural network", "TPU", "R", "Go", "CNN", "ViT", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/deep-tech/how-ai-surpassed-humans-in-playing-flappy-bird-game/
| 4
| 9
| 0
| true
| true
| true
|
66,883
|
ZEE5 Denies The Reported Breach Of Its Network
|
As per QuickCyber, a popular OTT platform — ZEE5 — was allegedly breached by a hacker called John Wick. It was reported that the hacker got hold of a staggering 150 GB of information, including the source code of the ZEE5 website. Over 150 million users have subscribed to the ZEE5 platform from across the world. While the exact number of users whose information the hacker has obtained is not known, information related to messages, passwords, emails, mobile numbers, and transactions has been breached. It is believed that the hacker plans to disclose the information by selling it in the public domain. However, ZEE5’s head of technology, Tushar Vohra, said that they are investigating the reported claims about the breach. “We are also cognizant of the fact that the OTT sector has exploded in the past few years, so has hackers’ interest in it. Especially, post-COVID-19 outbreak, data hacks have been on a steady rise. But, it is a shallow attempt to gain a vested interest.” It was claimed that the hacker has also shared a sample of the data with the media house that broke the story, which consists of secret keys and credentials of the AWS bucket. As per the sample data, the last update of the database was on 24th April, which indicates that users who subscribed to the OTT platform post-April might be safe. The shared information also reveals that the Korean hacker possesses ZEE5’s code repository on bitbucket.org. Earlier, it was claimed that the hacker was able to access the database and extract all the information related to the payments. The hack also brought one of ZEE5’s partners — Axinom — under the spotlight. Axinom provides various tech stacks for the OTT platform. The collaboration goes back to 2017, a few months prior to the launch of the ZEE5 platform in early 2018. However, the CEO of Axinom, Ralph Wagner, said that they neither manage the database of ZEE5 nor does any Axinom solution use the MySQL database that the image of the breached information represents. ZEE5 uses Axinom’s solutions to manage content, and the ZEE5 website software is operated by ZEE5 itself. Nevertheless, Axinom will investigate the instance further and release a statement as soon as the investigation is complete.
|
As per QuickCyber, a popular OTT platform — ZEE5 — was allegedly breached by a hacker called John Wick. It was reported that the hacker got hold of a staggering 150 GB of information, including the source code of the ZEE5’s website. Over 150 million users have subscribed to the ZEE5 platform from across the […]
|
["AI News"]
|
["sql v mysql"]
|
Rohit Yadav
|
2020-06-08T19:03:07
|
2020
| 375
|
["Go", "AWS", "AI", "cloud_platforms:AWS", "programming_languages:R", "programming_languages:Go", "programming_languages:SQL", "sql v mysql", "Aim", "SQL", "R"]
|
["AI", "Aim", "AWS", "R", "SQL", "Go", "cloud_platforms:AWS", "programming_languages:R", "programming_languages:SQL", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-news-updates/zee5-denies-the-reported-breach-of-its-network/
| 2
| 10
| 0
| false
| false
| false
|
10,044,444
|
Why I Quit Data Science
|
Software developer Sufyan Khot’s LinkedIn post titled ‘I quit data science’ sparked a lively debate on the platform. At the time of writing this article, the post had garnered close to 1,400 ‘likes’ and over 90 comments–a testament to how the post struck a chord with a large number of people. Quitting data science “Harvard Business Review hailed Data Scientist as the sexiest job of the 21st century. LinkedIn and Glassdoor continue to rank data science as one of the top professions. The median experience of people in the profession is probably less than two years, but the power it wields over the industry is huge. An average data scientist salary in the US is roughly $117 – 120K, much higher than what an experienced software developer might have. So these are two powerful metrics in terms of data science as a lucrative career from a youth’s perspective,” said Shekar Murthy, Senior VP, Presales, Solution & Professional Services, Yellow.ai, in an earlier interview with Analytics India Magazine. That said, data science is not for everyone. Khot realised it the hard way when he quit his regular job as a software engineer to pursue data science. “At my first job in 2019, I saw a team of data scientists at my organisation. I was very intrigued by the job overall and felt I had the aptitude to be one of them given my interest in mathematics, programming, and statistics. I zeroed in on a six-month course in data science because of which I had to quit my job.” Khot did exceedingly well in both theory and practicals. However, at the fag end of his course, he knew it was not his cup of tea. “There were two main reasons for this decision. Firstly, a large part of a data scientist’s job is quite monotonous, especially cleaning and processing raw data. A few estimates suggest that a data scientist spends as much as 80 percent of his/her time doing that. Secondly, despite reports of companies just waiting out there to hire data scientists was not true in my case. I felt that good jobs were far and few in between.” Khot also warns aspirants to not fall for the glamour element of the job. Now, Khot is back to his old line of work. From a recruiter’s perspective Harsh Gupta worked as a data scientist at prestigious organisations such as WWF and Johns Hopkins for six years. Gupta is currently the founder and CEO of Oklahoma-based organisation ProtoAutoML, an autoML software provider. Gupta shares how he was a 20-year-old graduate who spent the first six months of being a data scientist carrying out monotonous data cleaning and processing tasks. “In those six months, I was required to make exactly one regression model,” he said. “Many companies do not have the proper machine learning tools and/or still rely on legacy systems. A data scientist entrant is probably coming from an academic world and may already be exposed to platforms like Kaggle, GitHub, and other open-source projects. So they may come with some unrealistic expectations. They would want to straight away work on high-end projects, while in reality a large part of their time will be spent in making sense of the data,” said Gupta. He feels companies are also at fault as they often fail to clearly define the job role being hired for.
For example, he said companies use buzzwords like AI, machine learning, and data science while advertising for a job but, in reality, may expect their employees to also work on the Tableau and business intelligence side. He said clerical jobs related to preparing the data can be easily automated so that data scientists can work on more skill-based processes. For his organisation, Gupta said, he makes the job responsibilities clear from the start. “I only hire people who have had considerable experience in working with data science projects and have a GitHub of their own. This tells a lot about the candidate’s experience with data science, making it easy for both the employee and employer to set the right expectations,” he said.
|
LinkedIn and Glassdoor continue to rank data science as one of the top professions.
|
["AI Highlights"]
|
["Data Science", "Data Science Career", "Data Science Jobs", "Data Scientist Jobs", "data scientist salary in india", "open source data science projects"]
|
Shraddha Goled
|
2021-07-26T10:00:00
|
2021
| 730
|
["data science", "Go", "Data Scientist Jobs", "open source data science projects", "machine learning", "AI", "ML", "Data Science Jobs", "Git", "RAG", "Data Science Career", "data scientist salary in india", "analytics", "Data Science", "GitHub", "R"]
|
["AI", "machine learning", "ML", "data science", "analytics", "RAG", "R", "Go", "Git", "GitHub"]
|
https://analyticsindiamag.com/ai-highlights/why-i-quit-data-science/
| 2
| 10
| 2
| false
| true
| false
|
10,115,547
|
Even AI Doesn’t Care About the Middle Class
|
Entrepreneur and investor Marc Andreessen’s recent post on X showcasing the ‘likelihood of content moderation flagging’ in OpenAI has stirred users, especially in discussions about the ongoing bias infiltrating AI models. The bottom of the chart, where categories are least flagged – those that ‘nobody cares about’ – features the middle class, a section that seems to be nobody’s concern. Interestingly, wealthy people and the Republicans, too, are languishing at the bottom. Source: X The categories that top the chart have been classified as sensitive content, likely to be flagged. These also showed up in the Google Gemini fiasco, which made headlines for the inaccurate depictions of historical figures. For instance, Black Nazis and Black George Washington have been some of the outputs generated by Gemini. The results now seem like an overcompensation for those critical categories, or rather an over-representation. In India, even after Google apologised for Gemini generating a biased response against Prime Minister Narendra Modi, the AI model continued to give biased responses, especially against two prominent political parties in India. Source: Gemini Elon Musk, who retweeted Andreessen’s post, also highlighted the concerns about biases being programmed into AI. Andreessen believes that this layer of intrusion is ‘designed to specifically alienate its creator’s ideological enemies’. However, the concept of bias is not that simple, with no single solution. Source: X Why the Bias? François Chollet, author and deep learning engineer at Google, believes that bias in ML systems can arise from bias in the training data; however, that is only one possible source of bias among many. Prompt engineering can be considered a worse source of bias. “Literally, any part of your system can introduce biases. Even non-model parts, like your evaluation pipeline. How you choose to evaluate your model shapes what kind of model you get,” posted Chollet. When the fiasco began, people prompted Gemini to admit that there was another layer of ‘inner prompt’ (in addition to the user prompt) that ultimately added to the biased outputs. Investor and entrepreneur Mark Cuban has also weighed in on the matter. He believes that “no LLM will ever be bias-free or completely objective”. He believes that the preferences and decisions of the market, buyers, and users ultimately shape the perceived bias and desired intelligence in the models they choose. Meta AI chief Yann LeCun recently said that it is not possible to produce an AI system that is not biased. “It’s not because of technological challenges, although there are technological challenges to that, it’s because bias is in the eye of the beholder,” he said. LeCun believes that different people hold different perspectives regarding the definition of bias across numerous contexts. “I mean, there are facts that are indisputable, but there are a lot of opinions or things that can be expressed in different ways.” LeCun also regards open source as the answer to the problem. Source: X Open-Source to Show the Way LeCun, a firm believer in open-source projects, has expressed his sentiments on how open-sourcing models with inputs from a large group of people, including individual citizens, government organisations, NGOs, and companies, will result in a ‘large diversity of different AI systems’. However, open source is only a potential solution to mitigate bias. Musk has also been in favour of open-source projects.
“I am generally in favour of open sourcing, like biased towards open sourcing,” he had said in an interview with Fridman earlier. Interestingly, Musk’s vocal concern about bias being programmed into AI models comes at a time when xAI has announced the open-sourcing of Grok. The push for open source will be even stronger now, considering Musk’s ongoing battle with OpenAI regarding the company’s shift in ‘openness’. No End to AI Bias Despite the efforts to mitigate the problem, bias in AI models will continue. In recent research conducted by Bloomberg, name-based discrimination was observed when ChatGPT was tested as a recruiter. Resumes with names distinct to Black Americans were least likely to be ranked as top candidates for a financial analyst role compared to resumes with names from other races or ethnicities. The bias, which exists across image-generation tools, was also tested by Bloomberg in 2023. In the analysis, where 500+ AI-generated images were created using text-to-image tools, it was noticed that the image sets generated for subjects were divisive. For example, light-skinned subjects were generated for high-paying jobs. As long as AI bias persists, tipping the scale towards a particular segment of society, the middle class will continue to be ignored. Unfortunately, when considering job displacements owing to AI, it appears that middle-class jobs are the ones most affected. It seems there is no respite anywhere.
|
Is AI model bias only in the eye of the beholder, or are certain social segments deliberately neglected?
|
["AI Features"]
|
["OpenAI"]
|
Vandana Nair
|
2024-03-13T16:21:19
|
2024
| 775
|
["ChatGPT", "Meta AI", "TPU", "OpenAI", "AI", "R", "ML", "prompt engineering", "deep learning", "xAI"]
|
["AI", "ML", "deep learning", "ChatGPT", "OpenAI", "Meta AI", "xAI", "prompt engineering", "TPU", "R"]
|
https://analyticsindiamag.com/ai-features/even-ai-doesnt-care-about-the-middle-class/
| 2
| 10
| 1
| true
| false
| true
|
10,098,226
|
Exploring Meta’s AI Endeavours: From Personas to Advantage+ & More
|
Meta is gearing up to unveil a fleet of AI-powered chatbots that are about to add a sprinkle of innovation to your social media experience. These chatbots, set to grace platforms like Instagram and Facebook in the coming months, are primed to bring a fresh twist to online interactions and engagements. With this, Meta aims to make your time on these social media channels much more engaging and interactive, steering away from the dull and mundane experience. These AI-driven companions are not just any ordinary chatbots but are equipped with distinct personalities that will leave an indelible mark on your conversations. For instance, imagine getting travel advice from a surfer-dude-style persona or engaging in a dialogue with a chatbot channeling the wisdom and wit of Abraham Lincoln himself. It’s all part of Meta’s plan to create digital buddies that feel a little more human. According to reports, these chatbots, being referred to as “personas” within Meta, are all set to make their grand entrance into the social media scene this September. “Over the longer term, we’ll focus on developing AI personas that can help people in a variety of ways,” Zuckerberg wrote in a Facebook post. Instagram is also testing a new feature to label AI-generated content, aiming to increase transparency. Screenshots shared on the X microblogging platform by Alessandro Paluzzi reveal that Instagram will flag AI-created text, images, and videos. It’s uncertain if the label will only apply to content produced using built-in AI tools or if it can identify all AI-generated content. The feature could be based on Meta’s Llama 2, an open-source AI model. This move is intended to make it clear when content has been generated by AI on the platform. Meta, along with seven other tech companies, pledged to watermark AI content, and it looks like its research work on this is coming through. This initiative follows Meta’s launch of Threads, a Twitter rival app that made quite a splash but faced a rather dramatic drop in users shortly after its much-anticipated release. However, despite the challenges, Meta has been raking in profits like a pro. Advertising & Meta is all for it Meta’s main source of revenue is advertising, and its straightforward integration of AI across its apps positions it well to capitalise on AI. Additionally, their advertising game has been strong, boasting a 34% increase in ad impressions across their suite of apps during the second quarter of 2023. But here’s the twist – the average price per ad has actually taken a 16% dip during the same period. Change is afoot, and Meta seems to be playing it smart. Meta’s cooking up some new ad tools to give advertisers a boost with the help of AI. The company’s AI Sandbox project, being tested with a small group of advertisers, introduces features like Text Variation, which generates multiple ad text variations to optimize performance. Background Generation creates product image backgrounds from text inputs, and Image Outcropping adjusts visuals to fit different formats like Stories or Reels. These tools aim to provide advertisers with more creative options and complement existing processes, utilizing the power of generative AI while offering more choices to consider. However, caution is needed as there can be limitations and occasional errors in the generated content. Meta is also expanding its Advantage+ targeting for advertisers, adding new options to reach target audiences more effectively.
Advertisers will soon be able to switch between manual and Advantage+ campaigns with a single click, and Catalog Ads for Advantage+ campaigns will support video elements. A Performance Comparisons report will provide insights into manual vs Advantage+ campaign performance, and additional manual inputs will guide the system for better audience targeting. Meta is leveraging larger and more complex AI models within its ad system, enabling optimisation across different surfaces (Feed, Story, Explore, and Reels), resulting in improved conversions and ad quality. Beyond just upping the engagement game, these AI-powered pals have an additional trick up their sleeves – data collection. As you engage in conversations with these chatty companions, they’re silently gathering insights into your interests. This treasure trove of information could then be put to use by Meta to tailor content and ads to fit your preferences, creating a more personalised digital experience. With all the talk about personas and engagement, let’s not forget the financial front. Meta’s Q2 2023 results have been nothing short of impressive. They’ve notched up a revenue of nearly $32 billion, a cool 11% jump from the same period last year. While expenses have climbed, operational income has also shown a steady rise of 12%, reaching around $9.4 billion. And let’s not forget about the net income that’s shot up by a whopping 16%. But it’s not just about the numbers; it’s about vision. Mark Zuckerberg, the CEO, has his sights set on the horizon, with exciting projects lined up, from Threads to Reels and much more. As the digital realm evolves, Meta is harnessing the power of AI to bring a little more spark into your social media experience. These AI companions are set to reshape how we interact online, adding a dash of personality to our virtual conversations. However, it’s interesting how Twitter’s all about getting rid of those pesky bots that clutter up the platform. They want a cleaner space for everyone to chat and share—or so they claim. Zuckerberg’s got his eye on bringing in some bots for ads on his platforms. Meanwhile, Sam Altman has a cool plan afoot. He wants to tell real humans from those tricky bots using his Worldcoin orb. So, it’s like a bot banishing act versus a bot-friendly vibe, with Altman trying to sort them all out in the mix!
|
Meta’s main source of revenue is advertising, and its straightforward integration of AI across its apps positions it well to capitalise on AI
|
["AI Features"]
|
["Advertising", "AI Generated Content", "AI Powered Chatbots", "Facebook", "instagram", "Mark Zuckerberg", "Meta", "Revenue", "Transparency"]
|
Shyam Nandan Upadhyay
|
2023-08-08T12:44:10
|
2023
| 947
|
["API", "Meta AI", "chatbots", "AI Powered Chatbots", "Git", "instagram", "R", "RAG", "Mark Zuckerberg", "Facebook", "Go", "Meta", "AI", "Advertising", "generative AI", "Revenue", "Aim", "Transparency", "AI Generated Content"]
|
["AI", "generative AI", "Meta AI", "Aim", "RAG", "chatbots", "R", "Go", "Git", "API"]
|
https://analyticsindiamag.com/ai-features/exploring-metas-ai-endeavors-from-personas-to-advantage-more/
| 3
| 10
| 0
| false
| true
| true
|
10,119,822
|
Can Ruby Survive as the ‘Human-First’ Programming Language?
|
Ruby, a general-purpose programming language, and Rails, a framework for creating websites, apps, and systems, recently released version 7.1.3.2, which addresses several security issues and reflects the ongoing efforts to improve the language’s performance and currency. This has been done through features like YJIT (Yet Another Ruby JIT) and Ractors (Ruby’s implementation of the Actor model), demonstrating the community’s commitment to keeping the framework relevant and up-to-date. The language, which is used to build websites and apps, is employed extensively by platforms like Shopify, whose codebase spans over 2.8 million lines of Ruby code and 500,000 commits. Besides, the entire backend of Airbnb was built on Ruby until 2018, when it pivoted some parts to Golang. It is also used by Netflix, GitHub, and SoundCloud. But with the conversation shifting to AI, are more developers falling off the Ruby train? The language has declined in popularity in recent years, particularly among startups. As a Ruby developer points out in a Hacker News discussion, “We are in a time where people prefer compiled, statically typed languages, which contribute to Ruby losing its popularity; that’s why alternatives like Crystal are growing.” Despite the efforts to keep the language up to date, its tight coupling with the Rails framework, which is resource-intensive and rigidly monolithic in architecture, is a major reason for its declining popularity. Another user on the same Hacker News thread suggested — “Ruby should really try to separate itself from it and shine on its own.” This close association has led to a perception that Ruby is primarily a web development language, limiting its appeal to developers working on other types of projects. Is there a solution? The Ruby community, however, remains dedicated to improving the language and framework. In a recent interview, David Heinemeier Hansson (DHH), the creator of Ruby on Rails, discussed the future of the language, suggesting that Ruby’s ‘human-first’ approach makes it well-suited for developers looking to remain relevant as AI becomes more prevalent in the industry. “As we are now facing perhaps an existential tussle with AI, I think it’s never been more important that the way we design programming languages is designed for people,” DHH stated. This approach includes principles like Convention Over Configuration, which minimises the decisions developers need to make. It also embraces Integrated Systems, where Rails provides a cohesive stack with pre-selected tools that work well together, reducing the setup and configuration tasks. Despite all this, the usage of the language is on a steady decline. According to a Stack Overflow survey, the language’s popularity fell from 8.4% in 2019 to 6.2% in 2023. While the promise of AI-powered development is alluring, it’s crucial to consider the potential pitfalls, particularly for a language like Ruby and a framework like Rails. “AI seems like the last nail in the coffin for an easy, slow-evolving, highly standardised ecosystem like Rails,” argued a user on Hacker News. The ease and simplicity that made Rails attractive in the first place might work against it in an AI-driven world. If AI can handle the boilerplate and heavy lifting, the value proposition of Rails diminishes. Moreover, the rapid pace of AI advancements may not align well with Ruby’s slower, more deliberate evolution.
As a detailed Reddit post noted, “I don’t think it’ll ever go back to being the primary driver of startups, as the world has passed it by.” As developers flock to languages and frameworks that can keep up with the breakneck speed of AI innovation, Ruby risks being left behind. However, it’s not all doom and gloom. Ruby’s emphasis on developer happiness and its thriving community are assets that shouldn’t be discounted. Rather than compete head-on with the latest AI-centric languages, Ruby can double down on its strengths – its expressiveness, readability, and human-centric approach. DHH believes that Ruby on Rails will continue to evolve and adapt to the changing tech landscape, emphasising the importance of simplicity in web development. He envisions a future where “individual programmers can understand the entire system that they’re working on”. He further noted the importance of open-source collaboration and community-driven development, stating, “Ruby on Rails, from end-to-end, should be a free and open-source software that is not owned by any commercial entity. Then we can all work together to improve. We should never accept that something is too hard that it has to be done by commercial vendors.” By focusing on integrating AI in a way that enhances rather than replaces the developer experience, Ruby can carve out a unique niche in the AI era.
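Ractors, mentioned earlier in the piece, are Ruby’s take on the actor model: isolated workers that share no mutable state and communicate only by passing messages. The snippet below is a conceptual analogue of that idea sketched in Python (not Ruby’s actual Ractor API), using processes and queues purely for illustration.

```python
# Conceptual analogue of the actor model behind Ruby's Ractors, sketched in
# Python with multiprocessing: workers share no state and communicate only
# through message passing. This is not Ruby's Ractor API, just the idea.
from multiprocessing import Process, Queue

def square_actor(inbox: Queue, outbox: Queue) -> None:
    """An isolated worker: receives numbers, sends back their squares."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut the actor down
            break
        outbox.put(msg * msg)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    actor = Process(target=square_actor, args=(inbox, outbox))
    actor.start()

    for n in range(5):
        inbox.put(n)                                  # messages in...
    results = [outbox.get() for _ in range(5)]        # ...results out
    print(results)                                    # [0, 1, 4, 9, 16]

    inbox.put(None)              # ask the actor to stop
    actor.join()
```

In Ruby, this isolation is enforced at the language level, which is what allows Ractors to run truly in parallel rather than merely concurrently.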
|
Ruby’s human-centric approach positions it uniquely for the AI-driven future of software development.
|
["AI Features"]
|
["Ruby"]
|
K L Krithika
|
2024-05-08T12:15:19
|
2024
| 754
|
["Go", "API", "programming_languages:R", "AI", "innovation", "programming_languages:Go", "Git", "Ruby", "GitHub", "R", "startup"]
|
["AI", "R", "Go", "Git", "GitHub", "API", "innovation", "startup", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-features/can-ruby-survive-as-the-human-first-programming-language/
| 4
| 10
| 3
| false
| false
| false
|
12,241
|
5 Ways Big Data Plays a Major Role in the Media and Entertainment Industry
|
Delivering a personal experience is the ultimate goal of any entertainment and media company. With smartphones and associated digital media becoming the major source of entertainment, media creators and distributors must embrace Big Data analytics to create a connection with their customers. It will help unlock hidden insights about customer behaviour and facilitate achieving the ultimate goal – delivering personalized content. According to CloudTweaks, Facebook collects and processes 500 TB of data daily, Google processes 3.5 billion requests daily, and Amazon receives data on 152 million customer purchases daily. These numbers show that data volumes are skyrocketing day by day. Big Data is too big to ignore. If harnessed, it can be a massive force for boosting your business. For forward-thinking media and entertainment companies, Big Data holds the key to future business profitability. Who can Gain Big from Big Data? Almost every media-associated business with large volumes of data can leverage Big Data to its benefit. The largest beneficiaries in the media and entertainment industry will be video publishers (independent or private creators who publish content including video, audio, text and images), media owners (businesses that own the copyright to content sold through retail or mass distribution channels), gaming companies (online or offline video game makers that can log gamer reactions to fine-tune the gaming experience) and TV channels (broadcasters that deliver owned or purchased video content to a mass audience). In this article, we explore the ways Big Data is helping the entertainment and media industry make sense of the massive flood of data that gushes in from multiple sources. 1. Predicts Audience Interests Traditionally, media content was served only in limited forms. Today, these have been replaced by myriad services like pay-per-view, on-demand, live streaming and much more. In the process of delivering content across these formats, broadcasters also collect a vast amount of user data, which can give an in-depth understanding of behaviour and preferences. According to statistics compiled by YouTube, the lion’s share of its viewers falls in the 18-34 age group. YouTube has also unearthed several other interesting statistics about its users, such as what kind of videos viewers watch the most, what devices are used for streaming, how long each video was watched and much more. How did YouTube come to know its users in such detail? Big Data and analytics are at play. Big Data throws deep insights into YouTube’s audience behaviour and helps it syndicate content that is closely aligned with viewer preferences. 2. Insights into Customer Churn Customer churn is a serious menace that media companies find almost impossible to tackle. It has been found that at least 30% of customers share their reviews through social media. Until Big Data arrived, combining and making sense of all the user data from multiple sources, including social media, was next to impossible. With the advent of Big Data, it is now possible to know why customers subscribe and unsubscribe, and what kind of programmes they like and dislike, with crystal-clear clarity. Deeper insights into responses to pricing and subscription models can also be drawn with Big Data. 
Through Big Data analytics, content pricing, media content and even delivery modes can be tailor-made to reduce customer churn. 3. Optimized Scheduling of Media Streams The rapid growth of digital media distribution platforms has torn down the barrier that existed between end users and distributors. Reaching end users directly, without any intermediary, is more feasible than ever before. Moreover, social networks have set the stage for creating individual connections with viewers, unlike in the past when mass distribution of media was the norm. Connecting with the audience directly through scheduled media streaming can maximize revenues for media companies. Business models like on-demand and scheduled viewing can also be mastered through Big Data-enabled customer behaviour analytics. Big Data analytics helps identify the exact content that customers would want to engage with on a scheduled basis. 4. Content Monetization Big Data is helping media companies create new revenue sources. It arms media owners with new avenues to capitalize on the media interests of customers. Let’s examine the success story of The Weather Channel: The Weather Channel (TWC) is a privately owned weather business co-owned by IBM. TWC uses Big Data to observe and understand customer behaviour in specific weather conditions. With the help of Big Data, TWC has built the WeatherFX marketplace, where sellers can advertise products that have higher chances of selling in a given weather scenario. Presently, TWC is estimated to earn at least half of its advertising revenue with the help of Big Data analytics. Thanks to mobile proliferation and bandwidth expansion, it is now possible to reach a larger chunk of the digitally connected audience for content monetization. Big Data facilitates zeroing in on the right content that such an audience will prefer. 5. Effective Ad Targeting The revenue models of media and advertising are largely dependent on programmatic advertising. All these years, programmatic advertising has been done in a largely random manner, with the hope that customers will like what is shown to them. Big Data takes the guesswork out of programmatic advertising. It helps advertisers and businesses pinpoint the exact preferences of customers. It also gives a better understanding of what type of content viewers watch, at what time and for how long. This granular visibility into customer preferences helps improve the efficiency of ad targeting, resulting in higher conversion rates or TRPs, as the case may be. Furthermore, in a live streaming scenario, Big Data also helps advertisers tweak their broadcasts in real time to deliver a far more enriched and personalized media experience. The ‘Big’ Road Ahead Big Data can open up a fast lane to success for businesses in the entertainment and media industry. It can help negate the biggest risk factor in the industry – changing customer behaviour. Big Data helps keep a steady pulse on shifting customer preferences. It reduces customer churn, creates alternate revenue channels and boosts customer acquisition and retention through data intelligence. In the end, it creates a new ecosystem where customer experience is the centrepiece. After all, the entire entertainment and media industry thrives on the end-user experience it creates.
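To make the churn discussion above concrete, here is a minimal, hypothetical sketch of how a media company might score subscribers for churn risk. The file name and columns (subscribers.csv, watch_hours, support_tickets, plan_price, churned) are assumptions for illustration only and are not drawn from any dataset mentioned in the article.

```python
# Illustrative only: a minimal churn model on a hypothetical subscriber table.
# Column names (watch_hours, support_tickets, plan_price, churned) are
# assumptions, not from any dataset referenced in the article.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("subscribers.csv")          # hypothetical export of subscriber data
features = ["watch_hours", "support_tickets", "plan_price"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]    # probability each subscriber churns
print("AUC:", roc_auc_score(y_test, probs))

# Rank at-risk subscribers so pricing or content offers can be tailored to them.
at_risk = X_test.assign(churn_risk=probs).sort_values("churn_risk", ascending=False)
print(at_risk.head())
```

Ranking subscribers by predicted risk is what allows pricing, content or delivery tweaks to be targeted at the customers most likely to leave.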
|
Delivering a personal experience is the ultimate goal of any entertainment and media company. With smartphones and associated digital media becoming the major source of entertainment, media creators and distributors must embrace Big Data Analytics to create a connection with their customers. It will help unlock hidden insights about customer behaviour and facilitate achieving […]
|
["AI Trends"]
|
["big data role"]
|
Sunu Philips
|
2017-01-20T04:38:36
|
2017
| 1,068
|
["big data", "Go", "API", "programming_languages:R", "AI", "programming_languages:Go", "Git", "RAG", "analytics", "R", "big data role"]
|
["AI", "analytics", "RAG", "R", "Go", "Git", "API", "big data", "programming_languages:R", "programming_languages:Go"]
|
https://analyticsindiamag.com/ai-trends/5-ways-big-data-plays-major-role-media-entertainment-industry/
| 3
| 10
| 3
| false
| false
| false
|
21,552
|
Exploring The Greener Side Of Big Data To Rejuvenate Our Graying Environment
|
Pollution in India has been worsening day by day. After the national capital was declared the most polluted city in the country, others may soon follow in this human-made chaos. With urbanisation growing at a breakneck rate, pollution woes continue to grow in large proportions. Even though remedies are available to address pollution concerns, geographic changes brought about by humans in the name of development are only worsening the situation. This is where technological innovations such as predictive analytics and big data come into the picture to tackle environmental issues. The need for newer technologies With data available in all shapes and sizes, it is time to make the most of the unused data that has been piling up. Governing bodies such as the Central Pollution Control Board (CPCB), India and the individual state pollution control boards that operate under it have long been collecting and compiling pollution information. Not just that, non-governmental and private organisations such as Greenpeace and many other not-for-profit firms also publish their findings every year. According to IDC, data is growing by about 50% every year, with newer and unique data springing up every day. The rise of connected devices such as industrial equipment, automobiles, electricity meters and shipping crates is further adding to the data collected. These numbers suggest that there is no shortfall in the data collected; the shortfall is in the meaningful insights that can be put to use to fight rising pollution. The information can sometimes be inaccurate, outdated or unorganised, worsening the situation further. Big Data to the rescue: what’s been done across the globe? This is where the need to implement big data and analytics comes into the picture. Several efforts are being carried out across the globe by research institutes and companies, individually and in collaboration, that are bringing some sense to the pollution crisis. Fighting air pollution: Academics at the University of Texas, in collaboration with other universities, have worked on mapping air pollution using Google’s Street View cars. Joshua Apte, Kyle Messier and others used data integration methods and tools to enable continuous, high-spatial-resolution monitoring with the Street View cars. The vehicles not only monitor air quality but also help detect ultra-fine particles. The study was conducted for almost a year in commercial, residential and industrial areas, tracking key pollutants such as black carbon particles and oxides of nitrogen originating from industries, automobiles and even kitchen spaces. The dataset was very large: roughly 3 × 10⁶ measurements over an area of almost 30 km². After selecting the useful data by segregating the area into road segments, they sampled it using the Monte Carlo method. The results show where these pollutants peak at specific time frames, and the readings can be made available at any instant through the monitoring cars. With air pollution rampant in Indian metro cities, where the air quality index (AQI) for particulate matter 2.5 (PM2.5) nears 500 (a hazardous level), implementing a project like this would be immensely beneficial in curbing air pollution. Fighting water pollution: Indiscriminate usage and industrial effluents discharged into clean water sources adversely impact water quality. 
In India, although wastewater and sewage disposal methods are practised, they prove inefficient and insufficient due to factors such as the rising cost of sewage treatment and the enormous quantities involved. A recent publication by academics from the University of Singapore, working with Microsoft Research, provides a compelling approach to analysing water quality using multiple datasets gathered from various sources. Two views are presented: a ‘spatial view’ (pipe network structures) and a ‘temporal view’ (hydraulic and meteorological information). The study combines these views at each water source station to obtain a multi-view framework for predicting water quality in urban areas. The datasets are collected from six sources: water quality sites, hydraulic data, road networks, pipe attributes, meteorology and points of interest (POIs). They are optimised using a fast iterative shrinkage algorithm, and observations are captured for data spread over two months. The approach is feasible, and the Indian government could look into the methods presented in the study to monitor deteriorating water quality across the country. Another study, by academics at Kyungpook National University, South Korea, suggests building smart cities using big data analytics with a complete sensor-based system. The system is implemented using Hadoop, VoltDB and Storm for real-time processing, and is proposed as an architecture classified into four tiers: a Bottom Tier (data collection and generation), Intermediate Tier-1 (communication between sensors), Intermediate Tier-2 (data processing using Hadoop) and a Top Tier (use of the data analysis results). Although it is focused on better civic infrastructure, the pollution factor would be kept under control if implemented. Fighting noise pollution: Again, urbanisation can be cited as the prime reason for noise pollution. With rising traffic and cramped building spaces, noise pollution has taken a heavy toll on our lives as well as the environment, not to mention the industries and machinery that add to these woes. One such study by Microsoft Research worked on New York City’s noise pattern data. The researchers analysed the city using four different datasets: complaints to New York’s non-emergency public helpline 311, data from the social platforms Foursquare and Gowalla, and road network data. They built a tensor model in which noise categories such as vehicle noise and loud music are represented as vectors over time and combined into a tensor object, from which the useful data is extracted to evaluate the correlation between the noise variants. The results are shown in the form of ‘heat maps’, which reveal peak activity during weekdays and weekends. India could benefit greatly by employing these techniques. Scenario in India: While India leads in terms of pollution, it lags drastically in the adoption of tech developments such as big data and analytics. Even China, which faces some of the most notoriously toxic air quality, is deploying emerging tech to gain relevant insights for managing air pollution. Its collaboration with the IBM China Research Lab is driving key insights on pollution contributors to hasten predictions based on big data analytics. The model is helping predict the effects of water on the flow and dispersal of pollutants. 
Back in 2015, IBM announced an agreement with the Delhi Dialogue Commission to apply advanced technologies in support of the Government of Delhi’s clean air action plan. The collaboration was intended to leverage IoT and machine learning, combined with the analytical power of cognitive computing and statistical modelling, to provide the Commission with insights and recommended actions to improve air quality and better protect the health of Delhi’s citizens. Though the initiative offered much promise, India is in dire need of better tools and techniques to improve its pollution levels. By applying cognitive techniques, unstructured data can be analysed and, together with real-time insights, can provide a wealth of relevant detail that could lead to precise predictions and effective management models.
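As a rough illustration of the Monte Carlo subsampling idea used in the Street View air-quality study described above, the sketch below estimates how a per-road-segment median pollutant reading stabilises as the number of drive-by passes grows. All numbers here are synthetic assumptions; the real study worked with millions of mobile measurements.

```python
# Rough sketch of Monte Carlo subsampling for mobile air-quality monitoring:
# estimate how the median reading per road segment stabilises as the number
# of drive-by passes grows. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_segments, n_passes = 50, 400
# Synthetic "true" segment means plus measurement noise (arbitrary ppb units).
true_means = rng.uniform(10, 60, size=n_segments)
measurements = true_means[:, None] + rng.normal(0, 8, size=(n_segments, n_passes))

def median_error(n_samples: int, n_trials: int = 200) -> float:
    """Mean absolute error of the per-segment median using n_samples random passes."""
    errs = []
    for _ in range(n_trials):
        idx = rng.choice(n_passes, size=n_samples, replace=False)
        est = np.median(measurements[:, idx], axis=1)
        errs.append(np.mean(np.abs(est - true_means)))
    return float(np.mean(errs))

for n in (5, 10, 20, 50, 100):
    print(f"{n:3d} passes per segment -> mean abs error {median_error(n):.2f} ppb")
```

The point of such a simulation is practical: it suggests how many repeat drives a monitoring fleet would need before segment-level estimates become stable enough to act on.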
|
Pollution in India has been worsening day by day. After the national capital was declared the most polluted city of the country, others may be lining up soon in this chaos created by humans. Thanks to urbanisation growing at a mercurial rate, pollution woes continue to grow in large proportions. Even though remedies are […]
|
["IT Services"]
|
["cognitive computing human capital"]
|
Abhishek Sharma
|
2018-02-09T05:51:52
|
2018
| 1,193
|
["Go", "API", "machine learning", "AI", "R", "Scala", "Git", "RAG", "analytics", "cognitive computing human capital", "predictive analytics"]
|
["AI", "machine learning", "analytics", "RAG", "predictive analytics", "R", "Go", "Scala", "Git", "API"]
|
https://analyticsindiamag.com/it-services/exploring-greener-side-big-data-rejuvenate-graying-environment/
| 4
| 10
| 1
| false
| false
| true
|
10,099,765
|
Black Mirror Feels Closer Than Ever
|
Fresh off the press is the celebrated TIME AI 100 list. Among the tech titans stands Charlie Brooker, the man who offered a different point of view on the technologies being built in Silicon Valley. The 52-year-old writer extraordinaire is best known as the creator of the uncannily prescient Netflix series ‘Black Mirror’. Since its UK premiere in 2011, Brooker has consistently shattered the rose-tinted glasses through which we’ve long adored tech — when Meta was Facebook and X was Twitter. He has made us question: what if all this machine circus isn’t a blessing, but a curse? Time and again since Brooker’s worst-case-scenario lens hit the air, he has managed to correctly predict the future in horrible ways. The debut episode of the latest season looks at how people might have to contend with managing their digital alter egos. The theme went on to become a figurehead of the Hollywood writers’ strike, sparked by the anxiety around AI’s dear child, ChatGPT, taking away writers’ livelihoods. The striking writers are grappling with another pressing issue: how do we regulate these AI-generated doppelgängers? Coming back to Black Mirror, Season 6, Episode 1 introduces us to Joan, a tech exec whose life becomes a biographical drama on a Netflix-ish platform named Streamberry, with the on-screen Joan portrayed by none other than Salma Hayek. While the concept of a celebrity living in your shoes might sound enticing, it’s anything but that for Joan. Every day, she’s haunted by her own and reel-Hayek’s actions, which expose her daily shenanigans and regrettable choices. Things spiral downwards from there when she realises that the Hayek playing Joan on screen is an AI-generated replica of the actress, who has sold the rights to use her face to the company behind the show-about-the-show. Joan-esque Hollywood Enter Soul Machines, a company that could transform this dystopia into reality. A 2021 report by The Verge revealed that this classic Black Mirror company, co-founded by Greg Cross, primarily creates harmless customer service avatars. Much like Hayek in Brooker’s Netflix universe, Soul Machines has digitised NBA and K-pop icons, according to its website. The Information has even reported that “many stars and agents are quietly taking meetings with AI companies to explore their options”. While the company opens up new avenues for monetising celebrity likenesses, it also exposes celebrities to the risk of damaging their brand. Echoes of the futuristic past Very recently, AI tools have started to mimic famous as well as historical figures, and the parallel timeline in Black Mirror feels closer than ever. SAG-AFTRA, the union representing over 160,000 actors, warns that generative AI and similar technologies could leave “principal performers and background actors vulnerable to having most of their work replaced by digital replicas”. The wishful fantasy of hyper-personalised content tailored to individual tastes and generated by AI is not distant. Superstar Jennifer Lopez’s digital twin is already campaigning for cruise ads by mimicking her voice and appearance. The campaign boosted bookings while stirring concerns about misuse. While JLo’s team has taken precautionary measures, deepfakes have been put to awful use elsewhere. Often, Black Mirror’s storylines have seemed to foreshadow some of the darker developments in the Bay Area. 
Perhaps, in the not-so-distant future, there will be a Joan reading an article on AIM about an episode titled “Joan Is Awful”, only for that moment to become a scene in “Joan Is Awful” — reflecting the world’s obsession with being digitally real.
|
And AI’s dear children are the reason.
|
["AI Features"]
|
[]
|
Tasmia Ansari
|
2023-09-10T13:00:00
|
2023
| 566
|
["Go", "ChatGPT", "AI", "ETL", "ML", "Git", "Ray", "Aim", "generative AI", "R"]
|
["AI", "ML", "generative AI", "ChatGPT", "Aim", "Ray", "R", "Go", "Git", "ETL"]
|
https://analyticsindiamag.com/ai-features/black-mirror-feels-closer-than-ever/
| 2
| 10
| 0
| false
| false
| false
|
67,305
|
Facial Recognition Tech Hits The Wall, AI Journalist Misfires, And More, In This Week’s Top AI News
|
This has been quite an eventful week for the tech giants as they took a stance on the ongoing events in the US and pushed for new regulations on the use of facial recognition technology. Apart from this, there are other hits and misses of AI. Let’s take a look at all the top AI news of the week: Big Tech Puts Brakes On Facial Recognition Tech It all started with IBM CEO Arvind Krishna, who announced the company was getting out of the facial recognition business and called for reforms to advance racial justice and combat systemic racism. “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms,” Krishna wrote in the letter delivered to members of Congress late Monday. IBM’s withdrawal was followed by Amazon’s announcement of a one-year moratorium on police use of its Rekognition technology. “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology,” wrote Amazon in its press release. Microsoft later followed IBM and Amazon in restricting sales of facial recognition to law enforcement, announcing that it is barring police from using its facial recognition tech until federal regulation is in place. The company’s president Brad Smith said they have decided not to sell facial-recognition technology until there is a national law in place, grounded in human rights. The backlash against the use of facial recognition technology has been around for a while, and a few places in the US have even passed laws to restrict its use. However, the ongoing events in the US forced tech companies to take a stance. By holding back their own technology, the companies believe they can buy enough time for Congress to pass new laws. OpenAI Goes Commercial We're releasing an API for accessing new AI models developed by OpenAI. You can "program" the API in natural language with just a few examples of your task. See how companies are using the API today, or join our waitlist: https://t.co/SvTgaFuTzN pic.twitter.com/uoeeuqpDWR— OpenAI (@OpenAI) June 11, 2020 The efforts of OpenAI have started to come to fruition as it announced that the public can access its innovations through a new API launched this week. OpenAI stated that users can now apply its API to any language task — semantic search, summarization, sentiment analysis, content generation, translation, and more — with only a few examples. The company will be offering free access to the API for the next two months of its private beta. Along with the beta API, OpenAI is also starting an academic access program to let researchers build and experiment with the API free of charge. Wipro And IBM Collaborate To Accelerate Cloud Journey In Bengaluru Wipro and IBM announced this week that the Novus Lounge, located at Wipro’s Kodathi campus in Bengaluru, will offer a comprehensive suite of solutions leveraging cloud, AI, machine learning and IoT to help developers and start-ups. Customers can now access IBM and Red Hat solutions remotely to scale their technology. Additionally, Wipro will leverage IBM Cloud offerings alongside its own technology to develop solutions for clients in the banking and financial services, energy and utilities, retail, manufacturing and healthcare space. 
Microsoft’s AI Reporter Misfires https://twitter.com/jimwaterson/status/1270236669137031169?s=20 Last week, Microsoft’s news service MSN laid off contract-based news curators and replaced them with AI, sparking a debate on the fate of jobs in the face of automation. However, the decision to use AI backfired when one of the AI-curated articles misrepresented an interviewee. As first reported by The Guardian, the AI news curator’s early output included a story about the singer Jade Thirlwall’s personal reflections on racism being illustrated with a picture of her fellow band member Leigh-Anne Pinnock. “@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed-race member of the group,” wrote Thirlwall on social media. The AI curator even surfaced reports of its own misfiring, setting a new precedent for self-criticism in news circles. Adding to the irony, the article was removed from the feed manually! Facebook’s DeepFake Detection Challenge Results Are Out Launched last December, the DeepFake Detection Challenge drew more than 2,000 participants who submitted more than 35,000 models. Facebook has now shared the results of the competition. The top-performing model achieved 82.56 percent average precision on the public dataset. On a black-box dataset, however, the ranking of the top-performing models changed significantly: the highest-performing entrant achieved an average precision of 65.18 percent. The objective of the challenge was to raise awareness of the impact of fake imagery and to encourage developers to create tools that can help prevent people from being deceived by the images and videos they see online. Flipkart’s New Voice Assistant Flipkart has rolled out a new AI-powered voice assistant feature, called Supermart, to make it easier for consumers to shop. Customers can now explore deals and offers, filter results, add multiple items to their carts, receive contextual suggestions and check out using conversational voice commands. The feature has already been rolled out for Android users. Supermart supports Hindi and English and is expected to support more Indian languages in the future.
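For context on the OpenAI beta described above, a call against the 2020-era API looked roughly like the sketch below. This is a hedged illustration only: it assumes the early openai Python client (pre-1.0) and its Completion endpoint, the prompt and model choice are made up, and the library’s interface has changed substantially since then.

```python
# Illustrative sketch of the "few examples in, completion out" pattern the 2020
# beta API supported, written against the early openai Python client (pre-1.0).
# Parameter names reflect that era and are not the current interface.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # beta access key (assumed env var)

prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n"
    "Review: The camera is fantastic. Sentiment: Positive\n"
    "Review: Battery died within a day. Sentiment: Negative\n"
    "Review: Delivery was quick and the screen is gorgeous. Sentiment:"
)

response = openai.Completion.create(
    engine="davinci",     # the general-purpose model exposed during the beta
    prompt=prompt,
    max_tokens=1,
    temperature=0,
)
print(response["choices"][0]["text"].strip())
```

The key idea, as the article notes, is that the task is specified in natural language with a handful of examples rather than by training a bespoke model.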
|
This has been quite an eventful week for the tech giants as they took a stance against the ongoing events in the US and have made efforts to bring in new regulations for the use of facial recognition technology. Apart from this, there are other hits and misses of AI. Let’s take a look at […]
|
["AI News"]
|
["big data processing interview", "face recognition online", "IBM", "Microsoft", "OpenAI"]
|
Ram Sagar
|
2020-06-13T17:00:35
|
2020
| 918
|
["Go", "semantic search", "machine learning", "OpenAI", "AI", "sentiment analysis", "AWS", "ML", "big data processing interview", "RAG", "IBM", "face recognition online", "R", "Microsoft"]
|
["AI", "machine learning", "ML", "OpenAI", "RAG", "sentiment analysis", "semantic search", "AWS", "R", "Go"]
|
https://analyticsindiamag.com/ai-news-updates/ai-latest-microsoft-amazon-openai/
| 3
| 10
| 2
| false
| true
| true
|