
    What is artificial intelligence (AI)?

    Artificial intelligence uses computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

    What is artificial intelligence?

    Though numerous definitions of artificial intelligence (AI) have emerged over the past few decades, John McCarthy, in a 2004 paper, defines it as “the science and engineering of making intelligent machines, especially intelligent computer programs.” This definition encompasses the development of machines with problem-solving and decision-making capabilities, not limited to methods observable in biological systems.

    The origin of the AI discourse can be traced back to Alan Turing’s 1950 work, “Computing Machinery and Intelligence,” where he poses the fundamental question, “Can machines think?” Turing introduces the renowned “Turing Test,” wherein a human interrogator attempts to distinguish between computer and human text responses. This test remains a significant aspect of AI history and an ongoing philosophical concept related to linguistics.

    Stuart Russell and Peter Norvig’s influential textbook, “Artificial Intelligence: A Modern Approach,” outlines four potential goals or definitions of AI, organized around thinking versus acting and human fidelity versus rationality:

    Human approach:

    1. Systems that think like humans
    2. Systems that act like humans

    Ideal approach:

    1. Systems that think rationally
    2. Systems that act rationally

    Turing’s definition falls under “systems that act like humans.”

    At its core, artificial intelligence combines computer science and robust datasets to enable problem-solving. Its sub-fields, machine learning and deep learning, are frequently mentioned alongside AI; they involve algorithms trained on input data to make predictions or classifications.
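
    To make that concrete, here is a deliberately small sketch of the “predictions or classifications from input data” idea. The scikit-learn library and its bundled iris dataset are used purely as stand-ins for any dataset and any learning algorithm; neither is mentioned in the text above.

```python
# A minimal supervised-learning sketch: learn a mapping from input
# features to labels, then predict labels for unseen inputs.
# scikit-learn and the iris dataset are illustrative stand-ins only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)              # features and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # "learn" from data
print("held-out accuracy:", model.score(X_test, y_test)) # evaluate predictions
```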

    While AI has experienced cycles of hype over the years, the release of OpenAI’s ChatGPT marks a notable turning point. Earlier breakthroughs centered largely on computer vision; the latest leap forward is in natural language processing. Generative AI models extend beyond human language to learn the grammar of software code, molecules, natural images, and many other data types.

    The expanding applications of AI prompt critical conversations about ethics, especially as its integration into business gains momentum. As those discussions grow, IBM has staked out its own position on AI ethics.


    Types of artificial intelligence—weak AI vs. strong AI

    Weak AI, also referred to as Narrow AI or Artificial Narrow Intelligence (ANI), is designed and trained to perform specific tasks. Despite its name, Weak AI is anything but weak, powering many robust applications prevalent today, including Apple’s Siri, Amazon’s Alexa, IBM Watson, and autonomous vehicles.

    On the other hand, Strong AI encompasses both Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial General Intelligence (AGI), also known as general AI, is a theoretical concept where a machine possesses intelligence equivalent to humans. It would exhibit self-aware consciousness, problem-solving abilities, learning capabilities, and future planning. Artificial Super Intelligence (ASI), or superintelligence, goes beyond the intellectual capacity of the human brain. While strong AI remains purely theoretical with no practical implementations currently, ongoing research continues to explore its development. For now, depictions of ASI in science fiction, such as HAL, the superhuman and rogue computer assistant in 2001: A Space Odyssey, provide some of the most imaginative examples.


    Deep learning vs. machine learning

    Deep learning and machine learning, often used interchangeably, have nuanced differences. Both are sub-fields of artificial intelligence, with deep learning as a subset of machine learning.

    Deep learning is built on neural networks; a network is considered “deep” when it has more than three layers, counting the input and output layers. The practical distinction lies in how each approach learns. Classical machine learning depends more heavily on human intervention: practitioners typically engineer features by hand and work with structured, labeled data.

    Deep learning automates much of that feature extraction, determining a hierarchy of features on its own from labeled datasets (supervised learning) or from raw, unstructured data. This reduces manual effort and makes it practical to learn from much larger datasets; in that sense, deep learning is scalable machine learning.
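
    As a rough illustration of the “more than three layers” point, the sketch below stacks several layers in PyTorch. The framework choice, layer sizes, and the ten-class output are illustrative assumptions rather than anything prescribed above.

```python
# Minimal sketch of a "deep" network: several stacked layers between
# input and output, so intermediate features are learned rather than
# hand-crafted. PyTorch and the layer sizes are illustrative choices.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # input layer -> first hidden layer
    nn.Linear(256, 128), nn.ReLU(),   # second hidden layer
    nn.Linear(128, 64),  nn.ReLU(),   # third hidden layer
    nn.Linear(64, 10),                # output layer (e.g., 10 classes)
)

x = torch.randn(32, 784)              # a batch of 32 raw input vectors
logits = model(x)                     # forward pass: learned features -> scores
print(logits.shape)                   # torch.Size([32, 10])
```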


    The Emergence of Generative Models

    Generative AI refers to deep-learning models that can take in raw data, such as all of Wikipedia or the collected works of Rembrandt, and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw on it to create new work that resembles, but is not identical to, the original data.

    While generative models have been utilized for years in statistical analysis of numerical data, the advent of deep learning extended their application to more complex data types like images and speech. Variational autoencoders (VAEs), introduced in 2013, were pivotal in this evolution, being among the first deep-learning models widely employed for generating realistic images and speech.
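
    For intuition, here is a minimal sketch of the VAE idea; the architecture, sizes, and random stand-in data are chosen purely for illustration and are not drawn from the 2013 work. An encoder maps an input to the parameters of a latent Gaussian, a sample is drawn from that distribution, and a decoder tries to reconstruct the original; sampling the latent space later yields new, statistically plausible outputs.

```python
# Sketch of a variational autoencoder (VAE): encode an input to a latent
# Gaussian, sample from it, and decode back. Sizes and data are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(in_dim, 128)
        self.mu = nn.Linear(128, latent_dim)      # mean of latent Gaussian
        self.logvar = nn.Linear(128, latent_dim)  # log-variance of latent Gaussian
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction error + KL divergence pulling the latent toward N(0, I)
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

vae = TinyVAE()
x = torch.rand(8, 784)                  # stand-in batch of flattened images
recon, mu, logvar = vae(x)
print(vae_loss(recon, x, mu, logvar).item())
```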

    According to Akash Srivastava, a generative AI expert at the MIT-IBM Watson AI Lab, “VAEs opened the floodgates to deep generative modeling by making models easier to scale.” Early successes such as GPT-3, BERT, and DALL-E 2 showcased what generative AI can do. The expectation is that models trained on vast, unlabeled datasets can then be applied to a wide range of tasks with minimal fine-tuning, a shift from domain-specific systems toward broader AI that learns more universally across domains and problems.

    Generative AI’s impact on enterprise adoption is anticipated to be significant, particularly with the rise of foundation models. These models, trained on large, unlabeled datasets and fine-tuned for diverse applications, are expected to accelerate AI adoption by reducing labeling requirements. This reduction makes it more accessible for businesses to integrate AI, enabling highly accurate and efficient automation in various mission-critical scenarios. IBM aspires to extend the power of foundation models to every enterprise seamlessly within a hybrid-cloud environment.
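
    One way the “less labeling” benefit shows up in practice is reusing a pre-trained model directly, for instance for zero-shot classification. The sketch below assumes the open-source Hugging Face transformers library and a publicly available model; neither is referenced in this article, and real deployments would differ.

```python
# Sketch of reusing a pre-trained model with no task-specific labels:
# zero-shot classification via the Hugging Face transformers library.
# The library and model name are assumptions for illustration only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

ticket = "My card was charged twice for the same order."
labels = ["billing", "shipping", "technical issue"]

result = classifier(ticket, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])  # highest-scoring label
```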

    Practical Applications of Artificial Intelligence

    Artificial intelligence (AI) finds diverse applications in real-world scenarios. Here are some prevalent use cases:

    1. Speech Recognition:
    – Also known as automatic speech recognition (ASR) or speech-to-text, this capability uses natural language processing (NLP) to convert human speech into written text. Many mobile devices build speech recognition into voice assistants and voice search (e.g., Siri) and use it to make texting more accessible.

    2. Customer Service:
    – Online virtual agents are replacing human counterparts throughout the customer journey. They address frequently asked questions (FAQs) and offer personalized advice, altering the landscape of customer engagement on websites and social media. Examples include messaging bots on e-commerce sites, virtual agents in messaging apps like Slack and Facebook Messenger, and tasks typically handled by virtual and voice assistants.

    3. Computer Vision:
    – This AI technology allows computers to extract meaningful information from digital images, videos, and visual inputs. Unlike image recognition, computer vision can take actions based on these inputs. Powered by convolutional neural networks, it finds applications in photo tagging on social media, radiology imaging in healthcare, and the development of self-driving cars in the automotive industry.

    4. Recommendation Engines:
    – AI algorithms leverage past consumption behavior data to identify trends that support more effective cross-selling. Online retailers use these patterns to suggest relevant add-ons during checkout (a minimal sketch of the idea appears after this list).

    5. Automated Stock Trading:
    – AI-driven high-frequency trading platforms optimize stock portfolios by executing thousands or even millions of trades per day without human intervention. This application enhances the efficiency and responsiveness of stock trading strategies.
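
    To illustrate the recommendation-engine use case above, here is a minimal “frequently bought together” sketch based on item co-occurrence counts; the order data and the approach are invented for illustration and do not reflect any particular retailer’s system.

```python
# Minimal "frequently bought together" sketch: count how often items
# co-occur in past orders, then recommend the most frequent companions.
# The order data below is invented purely for illustration.
from collections import Counter
from itertools import combinations

orders = [
    {"laptop", "mouse", "laptop bag"},
    {"laptop", "mouse"},
    {"phone", "phone case", "charger"},
    {"laptop", "laptop bag"},
]

co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Return up to k items most often bought together with `item`."""
    pairs = [(other, n) for (it, other), n in co_counts.items() if it == item]
    return [other for other, _ in sorted(pairs, key=lambda p: -p[1])[:k]]

print(recommend("laptop"))   # e.g. ['laptop bag', 'mouse']
```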

    These practical applications showcase the versatility of AI across different domains, transforming the way tasks are performed and information is processed.

    Artificial Intelligence: Milestones in History

    The concept of ‘a machine that thinks’ traces back to ancient Greece, but the evolution of artificial intelligence (AI) gained significant momentum with the advent of electronic computing. Key events and milestones include:

    1950: Alan Turing publishes “Computing Machinery and Intelligence,” posing the question of whether machines can think and introducing the Turing Test to evaluate machine intelligence compared to humans.

    1956: John McCarthy coins the term ‘artificial intelligence’ at the first-ever AI conference, held at Dartmouth College. That same year, Allen Newell, J.C. Shaw, and Herbert Simon demonstrate the Logic Theorist, often regarded as the first running AI program.

    Late 1950s: Frank Rosenblatt develops the perceptron and builds the Mark 1 Perceptron, the first computer based on a neural network that “learned” through trial and error. Marvin Minsky and Seymour Papert’s 1969 book, “Perceptrons,” later becomes a landmark critique that dampens neural network research for years.

    1980s: Neural networks utilizing backpropagation algorithms become prevalent in AI applications.

    1997: IBM’s Deep Blue defeats world chess champion Garry Kasparov in a historic chess match.

    2011: IBM Watson triumphs over Jeopardy! champions Ken Jennings and Brad Rutter.

    2015: Baidu’s Minwa supercomputer employs a convolutional neural network to surpass human accuracy in image identification.

    2016: DeepMind’s AlphaGo, powered by a deep neural network, defeats world Go champion Lee Sedol, demonstrating AI’s prowess in complex games. (Google had acquired DeepMind in 2014 for a reported $400 million.)

    2023: The surge in Large Language Models (LLMs), exemplified by ChatGPT, transforms AI performance and its potential for driving enterprise value. Generative AI practices allow deep-learning models to be pre-trained on vast amounts of raw, unlabeled data, marking a significant shift in AI capabilities.

     
