Mykola Maksymenko: Machines Still Need Human Creativity

30 Nov, 2017

Mykola Maksymenko is a Research Lead in SoftServe’s R&D lab. He has a PhD in Theoretical Physics, and before joining SoftServe he did extensive research at the Max-Planck Institute for Physics of Complex Systems (Germany) and at the Weizmann Institute of Science (Israel). Recently, as his focus moved towards the physics of deep neural networks, he switched gears from academic research and joined SoftServe to put his theoretical expertise into practice.

We spoke with him in detail about Artificial Intelligence (AI) and Machine Learning (ML), trying to understand how they will influence our future.

Most of us already have a smartphone. It comes with a large set of sensors which allow you to measure your movements, heart rate, typing patterns, voice intonations, etc. Add a wellness wristband and a bit of AI, and you get a device that monitors a person’s condition, including their blood pressure and stress level, and advises you when it is better to do some exercise or to call a doctor. AI is the driving force behind every technological idea that we develop in R&D. Take a surveillance camera in a corridor, for example: instead of just passively recording the video stream, it can recognize objects, faces, activity, dangerous situations, and the emotional state of employees. These are examples of projects that we’re working on.

At R&D, I design AI pipelines for various products. We focus on healthcare, fintech, and retail applications, but some of our experts concentrate on biometric security and sensing. The latter involves heavy machine learning to find unique patterns in biological signals that identify different people. It came as a spinoff from our work on healthcare projects, in which we designed the recognition of various diseases and human body states based on data from biosensors.

Empowering products with AI

AI is a set of algorithms which makes up the core of your smart devices. This could be a big core located in the cloud, able to learn from millions of connected devices, or a small one integrated into a chip, performing the single task it was trained to do. For example, every digital camera focuses on people’s faces; even the cheapest models can do this. This is a kind of AI algorithm invented 16 years ago which performs so well that we can use it as a black box without needing to develop it any further. On the other hand, we see AI translation systems which are often so ineffective that we need to constantly correct them. These corrections are sent back to the training algorithm to improve the systems.

My job is to develop the architecture of ML systems for different problems and products. In most cases, you can’t take data and immediately get a result. The first step is preprocessing the data so that your model can learn from it as efficiently as possible. Next, the right model is chosen or designed. This could be a neural network, a Naïve Bayes classifier, a support vector machine, etc. It all depends on the problem and the amount of data available. The model then has to be trained with the available data, which is the mathematical exercise of finding the set of parameters giving the best prediction. If the model doesn’t work at all, we start over, adjusting it again and again. This process requires a lot of intuition and experimentation to get the best results and is very reminiscent of actual experimental science. Finally, the hardest part is deploying the system to a product that is not cloud based: we usually use scientific frameworks for model training and evaluation, but there is no sense in dragging all their unnecessary elements into the product.

Deep Learning awaited data and computer power

Ancient AI was based on simple rules and observations. Our wall clocks predict time and hence are the most common examples of predictive machines. The Greeks had mechanisms for predicting the positions of planets and stars to help sailors navigate the seas. In the 19th century, Legendre introduced the regression analysis method that allowed us to approximate models from training examples. By the beginning of the 20th century, a chess-playing machine had already been built; unless you looked inside, it seemed smart enough, despite being entirely based on if-else rules. Now, modern AI models are based on statistical machine learning theory. In simple words, this theory states how much different models can learn and generalize, and how their performance depends on the number of training examples. All the recent buzz about AI is mostly due to progress in deep neural networks. While these models have existed for a long time, they were so hungry for data and computing power that scientists only rediscovered them after 2010. Data became available thanks to the internet, while the gaming industry drove the development of powerful video cards that benefit the training of neural nets. All progress in computer vision, translation, and speech recognition is powered by Deep Learning algorithms.
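Legendre's idea of approximating a model from training examples survives almost unchanged in modern tooling. As a toy illustration (the data here is invented), fitting a line y = a·x + b by least squares takes one NumPy call:

```python
# Legendre's least-squares idea: fit a line y = a*x + b to examples
# by minimizing the total squared prediction error.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                      # examples drawn from a known line

# Design matrix with a column for the slope and one for the intercept.
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)                            # recovers slope 2 and intercept 1
```

The training step of a deep network is the same exercise at vastly larger scale: find the parameters that minimize prediction error over the examples.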

A researcher’s kung-fu

I turned to AI naturally. I worked on statistical physics, which essentially gave rise to early neural network models. In Israel, there is a strong community studying the physics of neural networks, and I noticed that it overlaps a lot with my own research on disordered systems. I followed the mainstream drive a bit and completed some online courses as well as a statistical learning course at the Weizmann Institute. That’s when I started to play with my own research ideas.

Since AI is a very lively science, I was not too afraid to suddenly switch gears and move into the tech industry. Here, the lifecycle of research is very short, and hence, intriguing. Unlike my long-term academic research in physics, new interesting ideas in tech can be implemented in a few months or even a few weeks.
Now we receive feedback very fast. Some of the products that we develop in R&D enter the market within a few months or years. In R&D, we implement cutting-edge solutions rather than commodity work assembled from off-the-shelf cloud offerings: we engineer new models and develop new processes for business verticals.

My usual day starts with reading new scientific papers and grasping new ideas which I immediately apply in my projects.

If your job is creative, it is safe

I believe that AI will make the world better. It frees people from unnecessary work. Everything that can be automated should be automated. If a computer can summarize texts better than a human, it should do it. If a computer is better at driving a car, it should do it. Many administrative jobs could be eliminated, allowing these people to switch to far more creative activities. Being creative is and will stay difficult for machines, at least for the next few decades. Creativity is driving the world’s progress, and this is the reason why humanity needs to concentrate on it.

The modern world is already filled with examples of narrow AI: language translation, knowledge-based search, contact lenses with face recognition, etc. The implementation of various invisible interfaces reading signals directly from our bodies and brains will feel like just another evolutionary step.