Artificial Intelligence (AI)

Artificial intelligence is a phrase we have heard a lot more in recent years due to breakthroughs in AI technology, but it is also a bit misunderstood thanks to preconceived notions built up over decades of pop culture. With that in mind, let’s go over how AI came to be, review a few popular AI applications, and cover some of the concerns about it.

Artificial intelligence as a concept began to take shape around the mid-twentieth century, as pop culture figures like the Tin Man from The Wizard of Oz brought the idea of autonomous machines into the popular imagination. Mathematician Alan Turing was one of the early thought leaders on the question of whether a machine like a computer could not only solve problems, but also make decisions. In other words, artificial intelligence. Unfortunately for Turing, technology had not yet caught up to where it needed to be to truly explore and realize the potential of AI. As technology progressed, specifically computers and computing power, interest in AI within the scientific community grew.

In the 1980s, AI research made a breakthrough with the techniques that would become known as “deep learning.” Deep learning is an approach in which a computer learns from experience: data is passed through layered (“deep”) neural networks, and the system gradually improves as it works through more examples. Limitations in computing power and memory at the time held back its impact, but thanks to Moore’s Law, it turned out to be more of a waiting game than a dead end.
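To make the idea of learning from experience a little more concrete, here is a minimal sketch of a small neural network improving at a task as it sees labelled examples. The use of Python and the scikit-learn library is purely an illustrative assumption; the text does not tie deep learning to any particular tool.

```python
# A tiny neural network that "learns from experience": it is shown labelled
# example points and gradually figures out the boundary between two classes.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A toy dataset: two interleaved half-moons of points, labelled 0 or 1.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small network with two hidden layers learns the pattern from the
# training examples -- its "experience."
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# How well does it do on points it has never seen?
print(f"Accuracy on unseen points: {model.score(X_test, y_test):.2f}")
```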

Moore’s Law is named after Gordon Moore, co-founder of computer chipmaker Intel. In a paper he wrote in 1965, Moore predicted that the number of transistors in an integrated circuit would double every year for the next ten years. Ten years later, he revised his prediction to the number of components doubling every two years. Roughly speaking, doubling the number of transistors also doubles the available processing power. Moore’s revised prediction has held up for almost fifty years now, and the continuing increase in processing power has allowed AI research to make larger leaps in progress. Thanks to the exponential growth in computer speeds and memory storage, new applications for artificial intelligence have led to many of the technologies we’ve seen in recent years.
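The effect of that doubling is easy to see with a little arithmetic. The sketch below starts from the roughly 2,300 transistors of Intel’s first microprocessor in 1971 (a commonly cited figure, used here only as an illustrative assumption) and doubles the count every two years.

```python
# Illustrating Moore's Law: double the transistor count every two years,
# starting from Intel's first microprocessor (~2,300 transistors in 1971).
transistors = 2_300
for year in range(1971, 2024, 2):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2
```

By the early 2020s this simple doubling rule reaches the hundred-billion range, which is roughly the scale of the largest processors being made today.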

For example, one form of AI that we are all familiar with is the virtual assistant. The most commonly known and used assistants are Siri (Apple), Alexa (Amazon), and Google Assistant. These virtual assistants are essentially chatbots that can interpret a user’s input and assist with tasks such as looking up information, scheduling events in a calendar, connecting to and interacting with IoT (Internet of Things) devices, sending messages, and more. The capabilities of virtual assistants continue to expand as AI technology advances.
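At a very high level, a virtual assistant maps what the user says to an intent and then carries out a matching action. The toy sketch below is purely hypothetical Python, with simple keyword matching standing in for the speech recognition and language models real assistants use, but it shows that interpret-then-act loop.

```python
# A deliberately simplified stand-in for a virtual assistant: match a
# request to an intent with keywords, then run the corresponding action.
from datetime import date

def handle(request: str) -> str:
    text = request.lower()
    if "weather" in text:
        return "Looking up today's forecast..."
    if "calendar" in text or "schedule" in text:
        return f"Adding an event to your calendar for {date.today()}."
    if "lights" in text:
        return "Sending an 'on' command to your smart lights."
    return "Sorry, I can't help with that yet."

print(handle("What's the weather like?"))
print(handle("Schedule lunch with Sam on Friday"))
```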

ChatGPT is a relatively recent addition to the AI world, and its capabilities can be a bit misunderstood. The GPT stands for “generative pre-trained transformer,” which means the program is trained on a large body of text before it is ever prompted by a user. While ChatGPT is a chatbot, it is also able to adapt to a user’s direction based on this pre-training. For instance, a user can ask it to write a memo, an email, a marketing brochure, a story, or a script by specifying what details to include. The results can be quite remarkable, although with varying degrees of success depending on the user’s prompts. The capabilities of ChatGPT were the biggest factor leading to the recent AI boom, but they have also prompted ethical concerns, with some critics seeing its use of existing content as plagiarism. ChatGPT’s developer, OpenAI, has stated that it is addressing this issue. There is also a concern that its use will take away jobs from those who make a living as writers.
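For a hands-on sense of what “generative pre-trained” means, the sketch below uses the open-source Hugging Face transformers library and the small, freely downloadable GPT-2 model; those specific tools are assumptions made for illustration, since ChatGPT itself is a hosted service rather than something you install. The model has already been trained on a large amount of text, so all it does here is continue the prompt it is given.

```python
# Text generation with a small pre-trained transformer (GPT-2).
# The model continues whatever prompt it receives, drawing on its training.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Write a short memo reminding staff about Friday's team meeting:",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```

GPT-2’s output is far rougher than ChatGPT’s, but the underlying idea of prompting a pre-trained model is the same.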

After the emergence of ChatGPT, AI applications began to appear in areas beyond the written word, such as artwork. Similar to the text-based ChatGPT, generative AI art takes a text prompt from a user and creates an image based on the user’s specifications. The system is trained on a large collection of images paired with corresponding text descriptions; when a user prompts it to create a unique picture, it draws on those learned associations to generate what the user asked for. This use of AI has come under fire for reasons similar to those raised about AI text generators: the images the AI is trained on come from existing artwork and photographs, and the original creators can see the results as plagiarism. And, as with writing generated by AI, there are concerns that AI-generated images will take the place of graphic artists.
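The same prompt-driven idea applies to images. The sketch below uses the open-source diffusers library with a publicly available Stable Diffusion checkpoint; those tools, and the GPU it assumes, are illustrative choices rather than anything named in this chapter.

```python
# Prompt-to-image generation with a pre-trained Stable Diffusion model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# The model was trained on image/caption pairs, so it can turn a text
# description into a brand-new image matching that description.
image = pipe("a watercolor painting of a lighthouse at sunset").images[0]
image.save("lighthouse.png")
```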

Outside of writing assistance and artwork generation, AI has found applications in other fields such as medicine. Research, testing, and the processing of medical data and images that once took months can now be done in hours, allowing for quicker turnaround of results and more accurate diagnoses. This dramatic increase in efficiency can help medical professionals in their work and improve patients’ quality of life.

Farming and agriculture have also found uses for artificial intelligence. Tasks like predicting crop growth times and identifying problem areas in fields can improve efficiency and yield. AI can also be used to automate farming equipment and greenhouses.

The fear of AI isn’t unfounded, but it may not warrant the level of alarm we see in some sci-fi films. While AI has been embraced by some as a way to save on costs at the expense of human workers, it is still limited in many ways. Generative AI, for example, relies heavily on the information humans have trained it on, and it can only work within that information. This also makes it susceptible to the mistakes and biases of the people training the systems, which can sometimes lead to incorrect and unexpected results.

AI is not yet at the point we see in books and films, where it has a sentient, independent mind like ours. It is closer to a language model that can search for and match up relevant existing information, and with some good prompting it can create unique narratives or images. It is fast, efficient, and often very useful, but it is not yet able to truly replace a human mind.