Artificial Intelligence

An intro to Artificial Intelligence

Picture Courtesy: Piqsels

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision-making, and it is already transforming every walk of life. In this report, we discuss AI applications across various sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values. There are three types of AI:

  • Artificial narrow intelligence (ANI), or weak AI, can focus on only one specific task or problem at a time. Currently, this is our widely understood definition of artificial intelligence as a whole. Narrow AI is programmed to complete a single job, such as telling the weather or playing a game.
  • Artificial general intelligence (AGI), or strong AI, is the inverse of ANI. AGI refers to machines that can successfully perform the full range of human tasks. This type of intelligence is considered human-like, given that general AI can strategize, reason, learn, and communicate in a manner aligned with human functions and processes. Some AGI machines can also see (using computer vision) or manipulate objects.
  • Artificial superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds.

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear exactly how AI would benefit their organizations.

Despite this widespread lack of familiarity, AI is transforming every walk of life. I hope to explain how AI is altering the world and raising important questions for society, the economy, and governance.

Artificial intelligence is being used in the following areas:

  • National security
  • Health care
  • Criminal justice
  • Military Defense
  • Entertainment
  • Robotics 
  • Smart cities
  • Generative Pre-trained Transformer 3 (GPT-3)
  • E-Commerce
  • Computer Vision
  • Travel & Transport
  • Autonomous Vehicles
  • Astronomy
  • Agriculture
  • Social Media
  • Education
  • Intelligent Cybersecurity

And many more.

To maximize AI benefits, follow these steps:

  • Encourage greater data access for researchers without compromising user privacy.
  • Invest more government funding in unclassified AI research.
  • Promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy.
  • Create a federal AI advisory committee to make policy recommendations.
  • Engage with state and local officials so they can enact effective policies.
  • Regulate broad AI principles rather than specific algorithms.
  • Take bias complaints seriously so that AI does not replicate historical injustice, unfairness, or discrimination in data or algorithms.
  • Maintain mechanisms for human oversight and control.
  • Penalize malicious AI behavior and promote cybersecurity.

In a nutshell, artificial intelligence is software that responds to stimulation in a manner consistent with traditional human responses, given the human capacity for contemplation, judgment, and intention. According to researchers, these software systems make decisions that usually require a human level of expertise and help people anticipate problems or deal with issues as they arise. As such, they operate in an intentional, intelligent, and adaptive manner.

Here are some leading universities that offer artificial intelligence as a field of study:

North America:

  • Carnegie Mellon University (Pittsburgh, PA)
  • Stanford University (Stanford, CA)
  • Massachusetts Institute of Technology (Cambridge, MA)
  • Harvard University (Cambridge, MA)
  • Yale University (New Haven, CT)
  • University of California (Berkeley)
  • University of Washington (Seattle)
  • Columbia University (New York)
  • Georgia Institute of Technology (Atlanta)
  • University of Texas (Austin)


Europe:

  • University of Oxford (England)
  • ETH Zürich (Switzerland)
  • University of Cambridge (England)
  • EPFL (Switzerland)
  • Imperial College London (England)
  • University College London (England)
  • The University of Edinburgh (Scotland)
  • Technical University of Munich (Germany)
  • Jacobs University Bremen (Germany)
  • Berlin Institute of Technology (Germany)
  • KTH Royal Institute of Technology (Sweden)
  • Delft University of Technology (Netherlands)
  • University of Amsterdam (Netherlands)
  • University of Birmingham (England)
  • King’s College London (England)
  • Institute of Cyber Intelligence Systems (Russia)
  • Innopolis University (Russia)
  • Oslo University (Norway)
  • NTNU (Norway)
  • Coventry University (England)


Asia:

  • The Mohamed bin Zayed University of Artificial Intelligence (UAE)
  • Asian Institute of Technology (Thailand)
  • Tsinghua University (China)
  • Rikkyo University (Japan)
  • Osaka University (Japan)
  • Tokyo Institute of Technology (Japan)
  • Khon Kaen University (Thailand)
  • Nanyang Technological University (Singapore)
  • Singapore University of Technology & Design (Singapore)
  • IIIT Hyderabad (India)
  • IIT Bombay (India)
  • IIT Kharagpur (India)
  • IIT Delhi (India)


Australia:

  • Deakin University (Geelong)
  • The University of Melbourne (Melbourne)
  • The Australian National University (Canberra)
  • Monash University (Melbourne)
  • University of Wollongong (Wollongong)

And many more universities in different regions of the globe.

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from various sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decision-making.
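The sense-analyze-act loop described above can be sketched in a few lines. This is a deliberately tiny illustration, not a real AI system: the sensor names and the temperature threshold are invented for the example.

```python
from statistics import mean

def decide(readings: dict[str, float], threshold: float = 30.0) -> str:
    """Combine several (hypothetical) temperature readings and pick an action."""
    fused = mean(readings.values())   # combine information from various sources
    if fused > threshold:             # analyze the material instantly
        return "activate cooling"     # act on the insight derived from the data
    return "no action"

print(decide({"roof": 33.1, "lobby": 29.4, "garage": 31.0}))  # → activate cooling
```

A production system would replace the simple average with learned models and add feedback from the actions it takes, but the overall loop is the same.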

GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series and the successor to GPT-2. GPT-3, introduced in May 2020 and in beta testing as of July 2020, is part of a trend in natural language processing (NLP) toward pre-trained language representations. The GPT-n series was created by OpenAI, which was founded by Elon Musk, Sam Altman, and others in San Francisco in late 2015.
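"Autoregressive" means the model predicts each next token from the tokens generated so far. The toy sketch below illustrates only that idea, with a tiny bigram counter standing in for GPT-3's billions of learned parameters; the corpus and prompt are invented for the example.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it in the corpus.
following: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt: str, length: int = 3) -> str:
    """Greedily extend the prompt one token at a time (autoregression)."""
    tokens = prompt.split()
    for _ in range(length):
        candidates = following.get(tokens[-1])
        if not candidates:
            break  # last word never appeared mid-corpus; nothing to predict
        tokens.append(candidates.most_common(1)[0][0])  # most likely next word
    return " ".join(tokens)

print(generate("the cat"))  # → the cat sat on the
```

GPT-3 does the same next-token prediction, but with a transformer network trained on hundreds of billions of tokens and sampling instead of the greedy choice used here.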

"No matter what, brain hacking is exceptionally hard," Elon Musk said at the Neuralink presentation on September 9, 2020.

A question: if thoughts, feelings, and other mental activities are nothing more than electrochemical signals flowing around a vast network of brain cells, will connecting these signals with digital electronics allow us to enhance the abilities of our brains? Neuroscientists have been listening to brain cells in awake animals since the 1950s, and a monkey's brain signals have now been used to control an artificial arm. In 2006, the BrainGate team began implanting arrays of 100 electrodes in the brains of paralyzed people, enabling basic control of computer cursors and assistive devices. This approach works adequately for simple movements, but can it ever generalize to a more complex mental process? Some researchers hope that AI can do it: perhaps, given enough data, AI could learn to understand the signals from any brain. However, unlike thoughts, language evolved for communication with others, so different speakers share general rules such as grammar and syntax. When it comes to influencing, rather than reading, the brain, the challenges are still unclear. According to Musk, the Food and Drug Administration (FDA) approved Neuralink for breakthrough-device testing in July.

Electrical stimulation activates many cells around each electrode, as was nicely shown in the Neuralink presentation. But cells with different roles are mixed, so it is hard to produce a meaningful experience. 

That said, decades of research have shown that the brain does not yield its secrets easily and is likely to resist our attempts at mind hacking for some decades yet.

Published: 2021-03-25 14:53:00
