Nowadays, artificial intelligence is one of the most talked-about terms in the world. According to some forecasts, AI will contribute $15.7 trillion to the global economy. It already creates a wide range of digital content — texts, images, music, videos, and more. It is worth understanding how it works, because your business or job may come to depend on it.
What is artificial intelligence?
Artificial intelligence (AI) is a way of making a computer or software ‘think’ like the human brain. It is achieved by studying the patterns of how the human brain works and by analyzing cognitive functions. This research has led to the development of intelligent software and systems.
If you have ever used a fingerprint scanner or Face ID on your phone, typed with T9, or talked to a chatbot in an online shop, you have already interacted with artificial intelligence. Other examples include ChatGPT, voice assistants such as Siri and Alexa, smart home systems, car autopilots, and more.
History of artificial intelligence development
Rise (1952-1956)
- 1955 — Allen Newell and Herbert Simon create the first artificial intelligence program, Logic Theorist. It proved 38 of 52 mathematical theorems and even found new proofs for some of them.
- 1956 — the term ‘artificial intelligence’ is coined by John McCarthy at the Dartmouth Conference.
Golden Years (1956-1974)
- 1966 — Joseph Weizenbaum created the first chatbot, ELIZA. It was named after Eliza Doolittle, the protagonist of George Bernard Shaw’s play Pygmalion, a working-class girl who was taught to speak like the ‘upper class’. ELIZA imitated a dialogue with a psychotherapist.
- 1972 — the first intelligent human-like robot, WABOT-1, was created in Japan.
The first Artificial Intelligence Winter (1974-1980)
During this period, AI researchers faced an acute funding shortage as governments and investors cut back.
AI Boom (1980-1987)
In 1980, the first National Conference on Artificial Intelligence was held at Stanford University.
The second Artificial Intelligence Winter (1987-1993)
Once again, investors and governments stopped financing artificial intelligence research due to high costs and disappointing results.
The emergence of intelligent agents (1993-2011)
- 1997 — IBM’s Deep Blue computer defeated world chess champion Garry Kasparov.
- 2002 — AI entered homes for the first time in the form of the Roomba vacuum cleaner.
- 2006 — companies such as Facebook, Twitter, and Netflix began using this technology in their products.
Deep learning, Big Data and artificial intelligence (2011 – present)
- 2011 — IBM’s Watson won the quiz show Jeopardy!, where it had to solve complex clues. It demonstrated that the system could understand natural language and quickly tackle difficult questions.
- 2012 — Google rolled out the Google Now feature for its Android app, which could anticipate and deliver information a user was likely to need, such as a weather forecast.
- 2020 — Baidu released its LinearFold AI algorithm to medical and clinical research teams developing a SARS-CoV-2 vaccine. The algorithm can predict the secondary structure of an RNA sequence in just 27 seconds, 120 times faster than other methods.
AI components
AI systems combine large volumes of data with intelligent, iterative processing algorithms. This combination allows the system to learn from patterns and features in the data it analyzes. Each time the system completes a full cycle of data processing, it tests and measures its own performance and uses the results to develop additional expertise.
AI does not exist without:
- Machine learning (ML). It gives AI the ability to learn. ML works with algorithms that spot patterns in the data they encounter and generate insights from them.
- Deep learning. This sub-category of machine learning allows AI to imitate the neural networks of the human brain. It can discern trends and patterns in data while filtering out noise and confounding signals.
- Neural networks. Deep learning is often made possible by artificial neural networks, which imitate neurons, the brain’s cells. These models use principles from mathematics and computer science to simulate human brain processes, enabling more general learning. A neural network consists of three kinds of layers — input, hidden, and output — and may contain thousands or millions of nodes. Data enters through the input layer; each connection between nodes carries a weight, and as data moves through the network, it is multiplied by these weights and combined at each node.
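To make the layer structure described above concrete, here is a minimal sketch of a single forward pass through a three-layer network, written in Python with NumPy. All sizes, names, and values are illustrative assumptions, not details from the article or any particular library’s API.

```python
import numpy as np

def relu(x):
    """Rectified linear unit, a common activation function."""
    return np.maximum(0.0, x)

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: input layer -> hidden layer -> output layer.

    Each connection carries a weight; incoming values are multiplied
    by those weights and summed at every node.
    """
    hidden = relu(x @ w_hidden + b_hidden)   # input -> hidden
    return hidden @ w_out + b_out            # hidden -> output

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))          # one sample with 4 input features
w_hidden = rng.normal(size=(4, 3))   # weights: 4 inputs -> 3 hidden nodes
b_hidden = np.zeros(3)
w_out = rng.normal(size=(3, 1))      # weights: 3 hidden nodes -> 1 output
b_out = np.zeros(1)

print(forward(x, w_hidden, b_hidden, w_out, b_out).shape)  # (1, 1)
```

In a real network, training would repeatedly adjust the weights to reduce prediction error; this sketch shows only how data flows through the layers.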
Types of Artificial Intelligence
- Purely reactive. These machines have no memory and store no past data, specializing in just one area of activity. In a game of chess, for example, such a machine observes the board and makes the best possible move to win.
- Limited memory. These systems collect past data and keep adding it to their memory. They have enough memory, or experience, to make good decisions, though their memory is limited. For instance, such a machine can suggest a restaurant based on the location data it has gathered about a person.
- Theory of mind. This type of AI understands thoughts and emotions and can interact socially.
- Self-aware. Self-aware machines are the future generation of new technologies. They will be intelligent, sensitive, and conscious.
Where it is used
Nowadays, the technology is used in many domains, including transport, manufacturing, finance, healthcare, education, and more.
For example, systems like Google Maps can analyze traffic flow at any given moment and report incidents such as road works or accidents as they happen.
In manufacturing, forecasting and preventive maintenance systems help producers avoid costly downtime, and incorporating AI into quality control tools raises production efficiency.
Machine learning helps financial organizations detect fraud. AI and ML also play a role in payment processing, mobile check deposits, insurance, and investment recommendations.
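As a toy illustration of how pattern-based fraud detection works, here is a hypothetical sketch in Python that flags transactions whose amounts deviate strongly from the norm. Real systems use learned models over many features (location, merchant, timing), not a single hand-written rule; the function name, data, and threshold below are all assumptions for illustration.

```python
import statistics

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts whose z-score exceeds the threshold.

    A toy stand-in for fraud detection: values far from the mean,
    measured in standard deviations, are treated as suspicious.
    """
    mean = statistics.fmean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

transactions = [12.5, 9.99, 14.2, 11.0, 13.3, 950.0, 10.5, 12.0]
print(flag_anomalies(transactions))  # [950.0] — the outlier stands out
```

The point of the sketch is the general idea: an ML system learns what “normal” looks like from past data and flags what falls outside it.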
In healthcare, AI is transforming how people interact with doctors. It helps diagnose conditions faster and more accurately, speed up drug discovery, and monitor patients via virtual nurse assistants.
Ukrainian startup Esper Bionics developed the Esper Hand, an AI-based bionic prosthesis. It performs the functions of a real hand, allowing a person using it to engage in sports, do household tasks, work on a computer, use a phone, and more.
AI in education will change how people of all ages learn. The use of artificial intelligence for machine learning, natural language processing, and facial recognition helps digitize textbooks, detect plagiarism, and track students’ emotions to determine who is struggling or bored.
AI requires significant computing power; without cloud technologies, much of this would not be possible.
The future of Artificial Intelligence
From the very beginning, AI has been under the watchful eye of scientists and the public. One common concern is that machines will become so advanced that humans will not be able to keep up, and machines will continue evolving on their own.
Another concern is that machines may interfere with people’s private lives and even be used as weapons. Other arguments focus on the ethics of AI and whether intelligent systems should be given the same rights as humans.
One more controversial issue is the potential impact of AI on human employment. As many industries seek to automate certain types of work with smart machines, there are concerns that as many as 300 million people could lose their jobs. Self-driving cars could eliminate the need for taxis and car-sharing programs, and manufacturers could easily replace human labor with robots. Yet technology should not be viewed only as a threat: over the centuries, various professions have disappeared, but new ones have always emerged.
AI may also affect climate change and the environment. Ideally, by using advanced sensors, cities could become less congested, less polluted, and generally more livable.