
Artificial intelligence has come a long way in the last few years, making meaningful contributions to automating mundane and time-consuming tasks and generating insights for data-driven decision making.
But how AI is created has also matured considerably, says global AI and emerging technologies expert Dheeren Velu. This is allowing for easier and more responsible use of the technology.
“AI is being used to solve business problems, detect fraud, manage better supply chains, recommend better products, predict customer churn, and so on,” says Velu, the Melbourne-based Director of Applied Innovation at international IT services provider Capgemini.
“But I must say, it’s still sort of rough around the edges,” he adds.
After an initial wave of enterprise-scale AI applications, the potential and limitations of the technology have become clear, informing efforts to make AI easier to develop and less prone to bias.
Here are four key trends Velu is seeing driving the current phase of AI development in the business world.
Simplifying AI’s design process
Capgemini’s Dheeren Velu
Artificial intelligence, by nature, is a complex and specialised field. Building AI applications to date has involved the input of data scientists, machine learning engineers and even experts in computational linguistics, for language-based applications.
“This sort of need for deep skills in the AI world was sort of restricting the applications and innovations in AI,” says Velu, who addressed an IT Professionals NZ webinar last week.
It meant the most powerful AI tools were restricted to industries with big budgets to throw at the development process, financial services, utilities and manufacturing among them, as well as the Big Tech players that were early to embrace AI development.
“There has been a massive talent problem in the industry as well, with borders being closed,” Velu adds.
Now AI is experiencing the same automation wave that has already streamlined software development, with “low-code” and “no-code” tools and platforms becoming available.
“Tech companies are building tools to automate tasks performed by these skilled individuals, enabling almost anyone in the organization, perhaps a data analyst, or a business user, to build AI applications,” says Velu.
He points to Create ML, Apple’s tool for letting users without deep technical expertise train machine learning models on Mac computers for use across its devices.
Google launched AutoML in 2018, introducing a series of machine learning tools that Google CEO Sundar Pichai said could help tackle skills shortages holding back AI development.
“We hope AutoML will take an ability that a few PhDs have today and will make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs,” Pichai said at the time.
MakeML has emerged as a standalone platform offering object detection and segmentation machine learning models with a wide range of potential uses.
“We already see businesses using these platforms to turn their large, unstructured, unruly data sets into structured data so that they can train models,” says Velu.
“But they can also build and deploy models with very minimal skills.”
MakeML’s object detection and segmentation platform
Microsoft Power Automate and Amazon’s newer Honeycode platforms are making software development much more accessible and will increasingly facilitate aspects of AI development in the same way.
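To get a feel for what these tools automate, here is a minimal sketch in Python, using scikit-learn’s grid search as a stand-in for the model-and-hyperparameter search an AutoML platform runs behind the scenes. The dataset and parameter grid are illustrative assumptions, not any vendor’s actual pipeline.

```python
# A minimal sketch of the idea behind AutoML tools: automating the
# model-selection and hyperparameter search a specialist would otherwise
# do by hand. Real platforms layer data preparation, architecture search
# and one-click deployment on top of this basic loop.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)  # stand-in dataset for the example
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(random_state=42)),
])

# The "automation": search a grid of candidate settings instead of
# asking a data scientist to tune each one manually.
search = GridSearchCV(
    pipeline,
    param_grid={
        "model__n_estimators": [50, 100, 200],
        "model__max_depth": [None, 5, 10],
    },
    cv=5,
)
search.fit(X_train, y_train)
print("Best settings:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```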
The rise of ML Ops
Most people involved in the software industry know the value of DevOps – the set of practices that combines software development and IT operations and gives structure to the complex processes involved. Now machine learning has its own set of practices, dubbed ML Ops.
“It’s basically a set of best practices,” says Velu. It includes workflows and automation tools to reduce complexity “so developers can focus on the problem that needs to be solved, and not worry about the intricacies of the development effort itself”.
ML Ops serves not only to improve the efficiency of machine learning models but also to automate the detection and removal of bias in them.
“If a certain model is drifting, we have a bunch of tools in these platforms that address that,” says Velu.
This is one of the fastest-growing areas among AI-related projects on GitHub, says Velu, an indication of the importance developers are placing on creating trustworthy and reliable AI applications.
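As a rough illustration of the drift checks Velu describes, here is a hedged sketch of one common approach: compare a feature’s training-time distribution against live production data with a two-sample Kolmogorov-Smirnov test. The synthetic data and the 0.05 threshold are assumptions for the example, not any particular platform’s defaults.

```python
# A sketch of one drift check that ML Ops platforms automate: compare
# the distribution of a feature at training time against what the model
# is seeing in production, and flag a statistically significant shift.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # reference window
production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # live window, shifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.05:  # illustrative threshold
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.4f}): consider retraining")
else:
    print("No significant drift in this feature")
```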
Auditability and explainability of AI
Related to that is a growing focus on making AI less of a “black box”, allowing its outputs to be better understood, explained and audited.
“It’s becoming an extremely important area, particularly in industries that are highly regulated – financial services, healthcare – where decisions made by machines can be life-altering,” says Velu.
In the government space, some moves are being made to introduce oversight of AI-driven decision making. New Zealand last year introduced the Algorithm Charter for Aotearoa New Zealand, which 26 government departments and agencies have now signed up to.
It’s a voluntary code, but requires that signatories “maintain transparency by clearly explaining how decisions are informed by algorithms”.
“Responsible AI is still in its infancy in many ways,” says Velu.
“But enterprise AI platforms have started to include auditability, and sort of basic data visualisation toolsets. I guess it would be hard to imagine wider adoption of AI in the enterprise without having explainability and auditability and responsible AI.”
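One widely used, model-agnostic explainability technique, offered here as a hedged sketch rather than any specific vendor’s toolset, is permutation importance: shuffle each input feature in turn and measure how much the model’s score degrades, giving an auditor a ranked view of what drove the predictions.

```python
# A minimal sketch of the kind of explainability hook enterprise AI
# platforms are adding. Permutation importance asks how much a model's
# accuracy drops when each input feature is shuffled; larger drops mean
# the model leaned on that feature more heavily.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()  # stand-in dataset for the example
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(data.feature_names, result.importances_mean), key=lambda pair: -pair[1]
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```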
AI moving to the edge
We’ve heard a lot in recent years about edge computing, where information is processed and stored on devices near the user, rather than everything being sent to the cloud to be processed and results delivered from afar.
The rise of IoT and 5G networks is enabling edge computing for low-latency, real-time applications. The same is now happening for machine learning and AI applications.
“We’re sort of getting AI computations and AI decision making closer to the source of data generation, as opposed to the cloud,” says Velu.
Apple has made much of the fact that the chip powering its iPhone is capable of doing on-device AI processing, so less data needs to be shifted off the phone to another platform. Apple’s latest software update, iOS 15, allows voice commands to its Siri digital assistant to also be processed on the phone, improving privacy and allowing Siri to offer some functionality when it is offline.
AI processing at the edge means a factory or electricity network operator can use data from IoT sensors to produce insights in a localised setting, with no need to send data to the cloud or offshore. With billions of IoT devices now deployed, Velu sees edge-based AI becoming an important area of technology.
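As a concrete, if simplified, sketch of what moving a model to the edge involves, the example below trains a tiny Keras network and converts it to TensorFlow Lite, a format built for on-device inference. The model and data are placeholders, not a real sensor workload.

```python
# A hedged sketch of edge deployment: train a small model, then convert
# it to TensorFlow Lite so inference can run on phones and IoT-class
# devices rather than in the cloud.
import numpy as np
import tensorflow as tf

# Stand-in for a real sensor-data model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(np.random.rand(256, 4), np.random.randint(0, 2, 256), epochs=1, verbose=0)

# Convert for on-device inference; quantization shrinks the model so it
# fits the memory and power budget of edge hardware.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Edge-ready model: {len(tflite_model)} bytes")
```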
Is AI’s progress too slow?
Artificial intelligence has been hyped to such an extent that its real-world applications sometimes seem disappointingly limited in comparison to what we’ve been promised.
But Velu says the field is progressing rapidly in certain areas. He points to the breakthroughs in natural language processing from OpenAI, whose Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce increasingly human-like text from the data its models were trained on.
“There are new techniques putting us on a new plane,” says Velu.
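GPT-3 itself is available only through OpenAI’s hosted API, but the autoregressive idea is easy to demonstrate with the openly available GPT-2 via the Hugging Face transformers library. This sketch assumes that library is installed and downloads the model weights on first run.

```python
# The autoregressive idea behind GPT-style models: predict the next
# token, append it to the prompt, and repeat. GPT-2 stands in here for
# its API-gated successor.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Artificial intelligence in the enterprise is",
    max_length=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```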
Spill-over benefits from AI development in one field are powering innovation in others. Work on autonomous vehicles, for instance, has yielded advances in computer vision that can be applied elsewhere.
Velu admits that many people are still worried about AI as an existential threat to humans. He describes the two camps of thinking around the technology as The Terminator versus R2-D2. Will AI come to dominate us and speed our demise, or act as a useful assistant, as the loveable robot of the Star Wars movies did?
“In the context of IQ, or our ability to compute or to recall information, some of the machines can already outperform us,” says Velu.
“But if you look at intelligence more broadly, I guess we are very far from having a machine or a system that can effectively replicate all our intelligence.”