ACATIS - pioneer of AI in asset management
ACATIS is one of the pioneers in the use of artificial intelligence (AI) in the investment industry. The company has been conducting research in this area since 2014 with the aim of applying it in portfolio management. The first practical application of artificial intelligence at ACATIS took place in 2016.
Research at ACATIS
ACATIS has been researching AI for about four years with the aim of using it in portfolio management. In the beginning, text analysis programmes were used that can search a report for specific key words.
Today, ACATIS works mainly with Deep Learning models, an approach from the field of machine learning. This type of artificial intelligence can be compared to a good analyst with years of experience. Over the course of their career, analysts get to know a large number of companies and gain insight into them. With time, they develop a knack for detecting patterns in companies’ figures and balance sheets. They learn which features are important, and their experience helps them put new situations into context quickly and accurately.
Deep Learning models work in a similar fashion. They learn to independently detect patterns in balance sheets, which they then apply to new data. The more data that is available to the system, the better it can learn and gain “experience”.
At the same time, as data volumes increase, so do the demands on processing performance. The two big advantages of Deep Learning models compared to an analyst are their much greater capacity and their emotional detachment. The system can also find patterns that humans would not be able to detect. In addition, it makes decisions strictly based on self-generated rules, leaving emotions aside.
The Beginnings of Artificial Intelligence
Artificial intelligence (AI) as a scientific field of study came into being in 1956 as part of the Dartmouth Summer Research Project in Hanover, New Hampshire (US). In this project, pioneers of computer science such as Marvin Minsky came together to answer the question of whether it was possible to create thinking computers. The grant application reads: “[...] The research project will try to find out how machines can be programmed to use language, to make abstractions and to develop concepts to solve problems of the type that at present can only be solved by people, and to continuously improve themselves. [...]”.* For many years, it was thought that computers had to be given a sufficiently large number of explicit rules in order to create a human-like AI. Today’s AI systems do in fact learn their own rules independently, but each of them can only carry out one defined, specialised task as well as or better than a human. This means that, over time, AI has shifted from a static, rule-based approach to a machine learning approach.
Machine Learning
Machine learning is a sub-field of AI in which machines learn from data without being explicitly programmed with if-then rules. The key word in this context is “learning”. Machine learning models improve their performance by repeatedly observing the data and the associated expected answers - just like humans. The machine creates its own set of rules to perform a pre-defined task. Once something is learnt, the machine can apply the knowledge it has built up to unknown data and generate its own answer. An important aspect of learning is that the data is not simply memorised, because then the machine’s responses to new data would not be useful.
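As a minimal sketch of this idea - not a model used by ACATIS - the following Python example fits a simple classifier on synthetic data and then lets it answer for observations it has never seen; the data and the hidden rule are purely illustrative.

```python
# Minimal machine learning sketch with scikit-learn; data and rule are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # 1,000 observations, 5 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # hidden rule the machine has to discover

# Hold back part of the data: learning means generalising, not memorising.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)                      # the machine derives its own rules from the data

print("Accuracy on unseen data:", model.score(X_test, y_test))
```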
Deep Learning
Deep Learning is a special machine learning method that has grown in popularity since 2010. The Deep Neural Network (DNN), which derives its name from the multiple, deeply stacked layers that make up the network, has become one of the most popular concepts of the last few years. The underlying concept, now known as the neural network, was first described by Warren McCulloch and Walter Pitts in the 1940s. With increasing depth, the correlations and patterns learnt in the layers become ever more complex. In image recognition, for example, the simple correlations in the first layers may consist of corners and edges, while the deeper layers combine previously learnt correlations and recognise entire objects. Already today, Deep Learning models are able to classify and create images, recognise and translate language, diagnose illnesses, drive cars or play board games as well as or better than humans. AI has already become part of our daily life.
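To make this layered structure concrete, here is a minimal sketch of a small image-recognition network in Keras. The image size, layer sizes and the ten output classes are illustrative assumptions, not a model described in this text.

```python
# A small Deep Neural Network for image classification; all sizes are illustrative.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),             # small colour images
    layers.Conv2D(16, 3, activation="relu"),    # early layers pick up corners and edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),    # deeper layers combine them into larger patterns
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),     # probability for each of ten object classes
])
model.summary()
```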
*Source: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
The success of Deep Learning
Why Deep Learning? Why now?
Since the 1950s, AI research has twice undergone a period of great optimism followed by disappointment and scepticism due to a lack of success, which caused research funding to dry up. Today, these periods are referred to as the two “AI winters”. But why is it that the Deep Learning principles that were already being studied in the 1990s could not be translated into significant success stories until the second decade of the 21st century? Over the past three decades, technical advances in three areas have led to the successful application of Deep Learning:
1. Hardware
The processing performance of computers has increased exponentially. Today, small Deep Learning models can already be trained on a laptop with a conventional processor (CPU). This would not have been possible 25 years ago. It must also be noted, however, that companies such as NVIDIA and AMD, through their development of graphics processors (GPUs) for the computer gaming industry, have played an even larger role in advancing AI research. A small number of GPUs replaces entire clusters of CPUs. If processing power grows by a factor of ten every three to five years, it increases by a factor of one million to 10 billion over 30 years. Quantity turns into quality. ACATIS uses computers with NVIDIA graphics processors to train its Deep Learning models.
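A quick back-of-the-envelope check of that arithmetic in Python:

```python
# Tenfold growth every five vs. every three years, compounded over 30 years.
slow = 10 ** (30 / 5)   # 1,000,000       -> "a factor of one million"
fast = 10 ** (30 / 3)   # 10,000,000,000  -> "a factor of 10 billion"
print(f"{slow:,.0f} to {fast:,.0f}")
```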
2. Data
The availability of data (key word: Big Data) has steadily increased, not least because of the internet. ImageNet, for example, is a data set of 14 million images that is used in image-recognition research and in an annual Deep Learning competition. For the last 15 years, ACATIS has been building a company database of financial data going back to 1986. The database covers 50,000 companies with more than 1,000 different factors. Data processing has also become easier. Previously, company data was entered into the computer manually in order to perform regression analyses. Today, we feed our Deep Learning models with large data volumes that are supplied automatically by different data providers. We are proud that our ACATIS systems analyse annual reports the second they are published.
3. Software
It was not until sufficient processing power and data became available that Deep Learning models could be trained not just in theory but also in practice. Over the last few years, it has also become possible to make practical improvements to the algorithms, which in turn has led to better applications. In addition, software has become more democratised. While in the past code had to be written entirely from scratch, today there are freely available programming packages such as Theano, TensorFlow or Keras (written in the programming language Python) that have simplified Deep Learning research. The growing success of Deep Learning has also attracted corporate interest in the results. In 2014, for example, the Deep Learning start-up DeepMind was acquired by Google for around USD 500 million. Today, it is not just a few hundred scientists working on Deep Learning, but tens of thousands. The main research hubs are the US, Canada and China.
[Chart: Volume of data created, captured, copied and consumed globally from 2010 to 2025 (in zettabytes). Source: www.statista.com; * forecast]
The Artificial Intelligence Model
Model structure
The artificial intelligence model consists of three building blocks and is inspired by the human brain. An input layer receives the data, several interim layers process the data, and the output layer presents the result. The individual layers consist of calculation units (neurons) that are linked to each other via connections (axons). These connections forward the information and are weighted according to their importance. The process is complex and very elaborate, and humans still play a key role in defining the architecture of the system. They define the variables, learning parameters, sub-models, target and error functions, and they carry out the pre-processing and data transformation steps. Error searches are also done by humans. Simply importing the data into a self-learning model, leaning back and waiting for a brilliant result - that is not how it works.
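The three building blocks can be sketched in a few lines of Keras. The 20 input factors, layer sizes, optimiser and error function below are illustrative assumptions, not the actual ACATIS architecture.

```python
# Sketch of the three building blocks: input layer, interim layers, output layer.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(20,)),                 # input layer: receives the data (e.g. 20 factors)
    layers.Dense(32, activation="relu"),      # interim layers: neurons process the data and
    layers.Dense(16, activation="relu"),      #   pass it on via weighted connections
    layers.Dense(1, activation="sigmoid"),    # output layer: presents the result
])

# Humans still define the learning parameters and the target/error function.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```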
How does this type of learning work?
Suppose a model is to learn to distinguish between dogs and cats in pictures. First, we need learning or training examples. In this case, a training example consists of a model input - a picture of a dog or a cat - as well as the correct output for the model, which is defined by a human (i.e. whether the picture shows a dog or a cat). Using these correct training examples, the model learns by repeatedly and gradually changing the weightings in the model based on the errors it makes on these training examples - a dog erroneously classified as a cat, and vice versa. The learning process must ensure, however, that the model does not simply memorise the images (overfitting), since in that case it could not be applied to new, still unknown images.
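A minimal training sketch of the dog/cat example in Keras might look as follows; the random placeholder images and labels only stand in for real, human-labelled photos, and the network is deliberately tiny.

```python
# Training sketch: labelled examples, gradual weight updates, and a validation split
# of unseen images to reveal overfitting (memorising instead of learning).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

images = np.random.rand(200, 64, 64, 3).astype("float32")   # placeholder pictures
labels = np.random.randint(0, 2, size=(200,))                # 0 = dog, 1 = cat (human-defined)

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability that the picture shows a cat
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# If training accuracy keeps rising while validation accuracy stalls, the model is
# memorising the images rather than learning to distinguish dogs from cats.
model.fit(images, labels, epochs=5, validation_split=0.2)
```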
If you would like to test how artificial intelligence works, you can do so in a fun way at https://playground.tensorflow.org.
Quant versus Artificial Intelligence – a Misunderstanding
What is the difference compared to a quantitative investment approach?
A quantitative investment fund usually relies on simple rules that were created by humans after the fact (e.g. with a linear data analysis) and that are applied statically, usually optimised for a certain point in time. Frequently, factor investments are based on just one dominant key indicator (see the image series below). An artificial intelligence model, on the other hand, finds non-linear correlations on its own and continuously adapts to market conditions through self-learning, without forgetting past events (time series optimisation). In this case, humans do not specify the rules. Non-linearities are essential for the formation of the model. Different key indicators are linked to each other in almost any combination. Neurons in neural networks, for example, specialise in detecting certain details, which are later included in the overall assessment. One neuron might observe a certain correlation between sales revenues and profit, while another neuron reacts to a particularly high level of growth, margin and market share. The entire model is the result of the interaction between all of the neurons.
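The difference can be illustrated with a small sketch: a fixed, human-defined rule on one dominant indicator versus a neural network that discovers a non-linear interaction between two indicators on its own. The indicators (growth and margin) and the "attractive company" signal are illustrative assumptions, not the ACATIS model.

```python
# Static quant rule vs. self-learnt non-linear model; data and signal are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
growth = rng.normal(size=2000)
margin = rng.normal(size=2000)
X = np.column_stack([growth, margin])
y = ((growth * margin) > 0).astype(int)   # signal depends on the interaction of both factors

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Old quant world: one fixed if-then rule on a single dominant indicator.
static_rule = (X_test[:, 0] > 0).astype(int)
print("Static rule accuracy:   ", (static_rule == y_test).mean())

# New AI world: a small neural network finds the non-linear combination itself.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("Neural network accuracy:", net.score(X_test, y_test))
```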
Quant Investment vs. Deep Learning
Old quant world vs. new AI world
Old quant world
- Model building by humans
- Model based on a priori views on how markets work
- Convictions / drivers are implemented
- Implementation of ‘if-then’ scenarios
- Backtesting – look-ahead bias
- Fixing the model and going live with it
Static rule-based strategies
New AI world
- Model building through AI
- Model not based on a priori views of how markets work
- Extracting features from data
- Create model from data
- Walk-forward testing – no look-ahead bias (see the sketch after this list)
- Model learns and adapts continuously
Adaptive self-learning strategies
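As referenced in the list above, walk-forward testing can be sketched as follows: the model is only ever trained on data that was available before the period it is tested on, so no look-ahead bias can creep in. The annual windows below are illustrative.

```python
# Walk-forward testing sketch: train strictly on the past, test on the period that follows.
import numpy as np

years = np.arange(1986, 2021)          # hypothetical annual data points
train_window, test_window = 10, 1

for start in range(0, len(years) - train_window - test_window + 1, test_window):
    train_years = years[start:start + train_window]
    test_year = years[start + train_window]
    print(f"train on {train_years[0]}-{train_years[-1]} -> test on {test_year}")
    # model.fit(data_for(train_years)); model.predict(data_for(test_year))  # sketch only
```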