Some thoughts and ramblings on AI
On Thursday, we started the day in an old-school wooden auditorium at the Université de Lyon. Today the lectures were about two fresh topics: AI and microservices. Our group was joined by 20 students from ULyon.
The first speaker was Erik Mannens. He is an associate professor at Ghent University and a researcher at imec. Erik gave a more conceptual, philosophical talk on AI and big data.
Some takeaways from this lecture:
Metadata is more important than one would think at first. To see why, consider any picture. A picture of, say, an NBA star is not very interesting in itself. More important is that the picture was taken at a certain time, that Pierce is the player in it, which fixture it was, and so on. You need to keep this in mind while creating and using data.
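To make this concrete, here is a minimal sketch of a picture bundled with its metadata. The field names and values are my own illustration, not from the lecture; the point is that questions about the data only become answerable once the metadata is there:

```python
# A picture file alone is just pixels; the metadata is what makes it useful.
photo = {
    "file": "game_photo.jpg",  # the raw asset
    "metadata": {              # hypothetical fields, for illustration only
        "taken_at": "2010-05-01T20:30:00",
        "subject": "Paul Pierce",
        "event": "NBA playoff game",
    },
}

def pictures_of(player, photos):
    """Return the photos whose subject matches the given player."""
    return [p for p in photos if p["metadata"]["subject"] == player]

print(len(pictures_of("Paul Pierce", [photo])))  # 1
```

Without the `subject` field, a query like this would require analyzing the image itself, which is exactly the kind of context the lecture argued we should capture up front.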
Avoid throwing away data when possible, because it may have a secondary use in the future (one you cannot predict at the moment).
Prof. dr. Mannens gave the example of a company that sells smartwatches and gathers the activity data of its users. Based on this activity tracking, they can pinpoint earthquakes.
On the graph, one can see different spikes of activity depending on the location of the smartwatch wearers: the closer to the epicenter of the quake, the more people were woken up.
The ethics of AI-made decisions. Take the case of a self-driving car that has no option other than running over either an old lady or a dog crossing the road. Which choice is ethically responsible?
I found the video about Autodesk really interesting. Autodesk's AI helps you design, draft, and model buildings and other structures. The interesting part is that Autodesk uses AI to optimize these structures for you, producing designs you could never have come up with yourself.
AI already outperforms us…
At the moment, AI is already helping us move forward. Think of current models that detect cancer very accurately, and of self-driving cars. Erik showed another use case of AI: small AI-driven robots sorting packages for Amazon's delivery service.
Do we need to be afraid of AI?
Dr. Mannens taught us that the answer is no.
There are actually two big problems at the moment:
The bots are not generalized yet. One may be able to drive a car, but it cannot make any other kind of decision.
A bot really needs tons of data to reach a decent accuracy level. In contrast, we humans (usually) do not have to fail more than a few times to learn that, for example, driving our car into a wall is not a good idea.
The future seems bright and there are no limits, but we cannot neglect the ethics. Is it 'okay' that big companies know (almost) everything about us? As a countermove, one could build applications on decentralized data, meaning that you are the only one who holds your own data.
By Domien Van Steendam