ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is a technology that allows computers to simulate human thinking and learning. Computers can then perform tasks that would under normal circumstances require human thought.

Artificial intelligence influences what news we listen to, what we follow on social media, and what we buy from online stores. But how does it work and what is it?

Imagine you need to retouch a photo, but AI tools are not yet available.

A photographer or graphic designer (that is, someone intelligent in the true sense of the word) has to use various tools and filters in complex graphics software to achieve the desired results. This work requires expertise, time and money.

AI applications can make the same adjustments almost instantaneously, at the touch of a button. We don’t have to feed the app complicated instructions or programming – the app is “intelligent” and understands what we want it to do. This is why we talk about artificial intelligence. Anyone can edit the photos, and all they need is a smartphone and a free downloadable application.

 

An AI revolution is coming in the way we control computers – very often there will no longer be a need for a programmer to write the software. We will simply describe our requirements to the computer, and it will do what we need.

 

Fun fact: Uses of AI in everyday life:

 

Recognition – of images, sound, faces and other signals (e.g. searching a photo library), unlocking a smartphone or identifying people in a crowd

Recommendation algorithms – can recommend topics/products that users might be interested in. This technology is used in many online stores, apps and social networks

Computer security – spam filtering, firewalls

Finance and banking – fraud detection, credit screening, automated stock trading

Medicine – diagnostics

Communication – chatbots, automatic translation from foreign languages (DeepL, ChatGPT, Google Bard, etc.)

 

Fun fact: All AI today is “weak” – it cannot think for itself; it just performs tasks. The “strong” AI that thinks for itself, familiar from sci-fi movies, does not exist.

How does artificial intelligence learn?

Thanks to artificial intelligence, computers can learn to perform more complicated tasks without human help. First, they have to train on examples we give them – the more examples, the better. This training process is called machine learning (ML). A human only needs to provide the right examples to learn from; the machine derives the exact algorithm itself.

Machine Learning

 

We train the computer on a database of 24 images. First, we upload 12 images of cows and tell it that each one is a cow. Next, we do the same with 12 pictures of sheep. The computer will then be able to recognize, with some probability, whether another picture shows a cow or a sheep. The more varied the pictures we show it during training, the better it will recognize the animal correctly.
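The idea above can be sketched in a few lines of code. This is a minimal toy example, assuming each image has already been reduced to two invented numeric features (size and wooliness); real systems extract thousands of features from the pixels, but the principle – average the labeled examples, then pick the closest label – is the same.

```python
# Toy supervised learning: classify an animal by comparing its features
# to the average ("centroid") of the training examples for each label.

def centroid(examples):
    """Average feature vector of the training examples for one label."""
    n = len(examples)
    return tuple(sum(e[i] for e in examples) / n for i in range(len(examples[0])))

def train(labeled_examples):
    """labeled_examples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(vectors) for label, vectors in labeled_examples.items()}

def classify(model, features):
    """Pick the label whose training centroid is closest to the new example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(model[label], features))

# Training data: "cows" and "sheep" described by (size, wooliness) - invented numbers.
model = train({
    "cow":   [(9.0, 2.0), (8.5, 1.5), (9.5, 2.5)],
    "sheep": [(4.0, 8.0), (3.5, 9.0), (4.5, 8.5)],
})

print(classify(model, (9.2, 1.8)))   # a big, short-haired animal -> "cow"
print(classify(model, (3.8, 8.7)))   # a small, woolly animal -> "sheep"
```

Note that nobody told the program *why* a cow is a cow – the rule emerges from the examples, exactly as described above.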

Similarly, artificial intelligence can learn to understand the way humans communicate. It only needs to “study” a sufficient number of human conversations.

Fun Fact: Machine learning is not perfect. A small child can recognize a cow even in a picture that is upside down. A machine learning algorithm cannot, unless the training data also includes the original picture in all possible rotations. Abstract thinking is something even the latest versions of AI lack.

 

Fun fact: A certain milestone in AI was reached in 1997, when the Deep Blue computer beat Garry Kasparov at chess. The computer was not intelligent; it was simply able to evaluate far more possible moves than a human. It didn’t understand chess; it was just programmed to play it.

Simple AI systems following precise instructions, like Deep Blue, are still used because they are sufficient for some tasks.
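The "search every move" approach that Deep Blue used can be shown on a toy game. This is a minimal sketch, not Deep Blue's actual algorithm: two players take 1 or 2 sticks from a pile, and whoever takes the last stick wins. The program has no understanding of the game – it just tries every sequence of moves to the end.

```python
# Brute-force game search: the program explores every possible continuation
# and picks a move that leaves the opponent with no winning reply.

def best_move(sticks):
    """Return (move, True) if the current player can force a win, else (1, False)."""
    for move in (1, 2):
        if move == sticks:
            return move, True            # taking the last stick wins immediately
        if move < sticks:
            _, opponent_wins = best_move(sticks - move)
            if not opponent_wins:
                return move, True        # leave the opponent in a losing position
    return 1, False                      # every move loses against perfect play

move, winning = best_move(4)
print(move, winning)  # -> 1 True: take 1 stick, leaving a losing pile of 3
```

Chess works the same way in principle, but the number of possible continuations is astronomically larger, which is why Deep Blue needed specialized hardware.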

 

Fun Fact: Machine learning can be done without examples. This is known as learning without a teacher. Google DeepMind’s AlphaGo program taught itself to play the game Go (a game with simple rules but an enormous number of possible moves). In the beginning it was given only the basic rules of Go, and then it started playing against itself. It played millions of games in a matter of months (more than a human could play in a lifetime) and went on to beat the Korean grandmaster Lee Sedol, one of the strongest players in the world.

Thanks to this principle, AI can see relationships in the data that it has never been shown. This has huge potential, for example in health data evaluation. For instance, AI can evaluate some medical images better than the average doctor.

AI RECOGNITION FUNCTIONS

Applications using artificial intelligence for object recognition have been trained on huge databases. They have a lot of useful features. For example, in our parking lot, AI can use the camera to read your car’s license plate number and automatically raise the barrier if you have paid.

Image recognition

The artificial intelligence in this application was trained on a database of many millions of drawings collected from people through a simple game made by Google. The game is called Quick, Draw!, and you can still play it and add more images to the database.

 

Facial recognition

In order for AI to estimate your age, gender and mood, it had to be trained on a large number of people. It has seen many faces, each paired with the relevant details about the person they belong to.

Fun fact: 

The system recognizes faces in video in real time. A face is characterized by a unique numerical vector, which is then compared to a database of similarly processed reference facial images. The vector matching is so fast that the system can find a matching face almost immediately, which makes it well suited to searching databases of people.
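The matching step described above can be sketched very simply. This is a toy illustration, assuming each face has already been converted into a numerical vector (here just three invented numbers; real systems use vectors with hundreds of dimensions).

```python
# Face matching as vector comparison: the closest reference vector wins,
# unless nothing in the database is close enough.

def distance(a, b):
    """Euclidean distance between two face vectors - smaller means more similar."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_match(face, database, threshold=1.0):
    """Return the name of the closest reference face, or None if nothing is close."""
    name, best = min(database.items(), key=lambda item: distance(item[1], face))
    return name if distance(best, face) <= threshold else None

# Reference database of processed faces (invented example vectors).
database = {
    "Alice": (0.11, 0.52, 0.80),
    "Bob":   (0.90, 0.10, 0.30),
}

camera_face = (0.12, 0.50, 0.79)          # vector computed from a live camera frame
print(find_match(camera_face, database))  # -> Alice
```

Because comparing short vectors is just arithmetic, the lookup stays fast even for very large databases – which is exactly why the real systems convert faces to vectors first.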

In practice, the system can, for example, count people, collect statistics on purchases, track the wearing of masks during Covid restrictions, or recognize individuals from a database of wanted persons.

 

Generative Artificial Intelligence

Another type of artificial intelligence is applications that can create (generate) new content. This is why they are called GENERATIVE. They can generate images, text, and even computer code.

Instructions for generative AI are given in the form of short prompts. These are short messages that tell the AI what to do.

A cleverly delivered prompt determines how well the generated content meets your expectations. The idea and the input always come from a human.

Fun Fact: When you use the word “please” in a prompt, the generated content will usually be of higher quality.

AI gallery

The screen is preloaded with several prompts that are constantly changing. Use any combination of these to create an original work of art using AI.

The artwork on the screen was created using the AI gallery. Who do you think is the artist?

Who is the author?

The author of the work is you, because you defined how the painting will look. You just used AI technology to create it instead of a paintbrush and paint.

 

 

 

 

Chatbot Holly

 

A chatbot (short for “chatting robot”) uses natural language processing techniques to interact with users, either verbally or in writing. You may commonly encounter them as a voice assistant when you phone a large company, or as an automated chat window on the website of a major online store.

Chatbots today can also have more sophisticated and meaningful conversations, thanks to advances in so-called language models. A language model connects human language with machine-readable data by converting all text into numbers, which are then statistically evaluated. It arranges words in the order in which they would most likely follow one another to answer your question; it does not actually understand the text at all. One of the best-known large language models is GPT (Generative Pre-trained Transformer), which also powers the most famous chatbot, ChatGPT.
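The statistical idea – "which word most likely comes next" – can be shown with a tiny toy model. This sketch just counts which word follows which in a short invented text and always predicts the most frequent follower; real models like GPT learn billions of parameters instead of a simple table, but the goal is the same.

```python
# Toy language model: a table of word-pair counts used to predict the next word.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."

# For each word, count the words that follow it in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # -> "cat" ("the cat" occurs twice; the alternatives once)
print(predict_next("sat"))   # -> "on"
```

Notice the model has no idea what a cat is – it only knows the statistics of the text it was trained on, exactly as the paragraph above describes.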

ChatGPT looks like a simple chat application, but on the other end is not a human but an advanced language model. Questions are asked in the form of prompts, and ChatGPT answers them. You can respond to these answers, gradually building a communication thread whose context ChatGPT takes into account in its further answers.
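The "communication thread" works by keeping a running history and sending the whole history to the model with every new question. In this minimal sketch, `ask_model` is a stand-in placeholder, not a real ChatGPT call – it only reports how many earlier messages the model would see.

```python
# A chat thread as a growing list: the model always receives the full history,
# which is how it can adapt its answers to the context of the conversation.

def ask_model(history):
    """Placeholder for the language model: just echoes the thread length."""
    return f"(answer based on {len(history)} previous messages)"

history = []

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    answer = ask_model(history)          # the model sees the full thread so far
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("What is a chatbot?"))        # -> (answer based on 1 previous messages)
print(chat("Can you give an example?"))  # -> (answer based on 3 previous messages)
```

Each exchange adds two entries (your prompt and the answer), so the context available to the model grows as the conversation continues.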

Our Holly is also connected to ChatGPT. She uses the camera to see when you have come up to her. She converts your request to text and puts it into ChatGPT. She will read the response back to you.

The whole process takes a while, so please be patient!

Task:

Give Holly a prompt to create an epigram (a short humorous satirical poem) about your favorite animal. Speak slowly and clearly and don’t forget to ask Holly nicely!

For example: “Please give me an epigram about a giraffe.”

How did you like the poem? Did it have any flaws? If you feel like it, try asking Holly to compose a Haiku, write a short story containing 3 words you made up, etc. The possibilities are endless!

 

If an AI doesn’t know something, it may make up an answer that sounds very plausible. Its results therefore need to be verified every time.

Fun fact:

ChatGPT was trained on approximately 9,000 billion words. Reading at 250 words per minute, you would need 68,493 years to read them all.
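The arithmetic behind that figure is easy to check, assuming nonstop reading at 250 words per minute:

```python
# 9,000 billion words at 250 words per minute, reading around the clock.
words = 9_000_000_000_000
minutes = words / 250
years = minutes / 60 / 24 / 365
print(round(years))  # -> 68493
```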

 

Is AI a threat?

Motto: “Artificial intelligence won’t put you out of a job, but someone using it might.”

Artificial intelligence is a useful tool that makes our work easier, much like the invention of the wheel, the steam engine or the computer.

It is evolving at tremendous speed. However, if you start to get into it a little more deeply, you’ll soon find that it actually makes a lot of mistakes. And what looked great and excited you at first glance is actually not that great after all.

We are not in danger of being completely replaced by AI at work, but we may be replaced by someone who knows how to use it better than we do, and who will therefore get better results.

The number of tools and applications using AI is growing every day. It is not humanly possible to keep track of everything and know all the latest developments. However, we should not be afraid of AI, but try it out and be aware of how it works, what its possibilities are and, on the contrary, what its limits are and what risks its use brings.

 

Main risks of AI:

  1. Deepfakes and disinformation:

With the development of artificial intelligence, fake photos, audio recordings or videos (collectively, deepfakes) are appearing in ever greater numbers. While sometimes funny and entertaining, they can be misleading, creating misinformation and reducing the overall credibility of information available on the internet in the eyes of the recipients.

  2. Invasion of privacy:

Artificial intelligence algorithms track our web and social media activity to collect data. This can lead to privacy violations and questions concerning the ethics of collecting personal data.

  3. Automation of weapons:

In the military, artificial intelligence is used to automate weapons, and unless it is properly controlled and managed, there are huge risks, both in terms of security and ethics.

  4. Market volatility:

Artificial intelligence algorithms are often involved in the automated buying and selling of assets in financial markets. This can contribute to market volatility and create situations where sudden changes in share prices occur.

  5. Digital dementia:

Handwriting, reading longer continuous texts, navigating maps and terrain without GPS, and many other traditional skills are now diminishing or disappearing entirely as a result of digitization and the development of artificial intelligence. This can lead to a general deterioration of people’s mental abilities, referred to with slight exaggeration as digital dementia.

  6. The growing influence of megacorporations:

Creating AI models is costly. There are only a few large companies with the capacity to create and maintain their own AI model. This can lead to an imbalance in ownership and influence in the AI field.

 

New and emerging technologies bring with them ethical challenges and societal implications. The TECHETHOS project addresses social awareness and the ethical dimension of selected groups of technologies, such as climate engineering, digital augmented reality and neurotechnology.

TechEthos has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement no. 101006249.