aifora Platform

The aifora platform is the foundation of our cloud-based predictive services. Via adapters and connectors, companies can import their internal data into our data refinery and use it in our applications.

In addition, we provide our own data sources, such as competitor prices and local weather data, which are linked to the platform and feed into the forecasts.

Once your company is connected, all our services are flexibly available to you.

What matters most is that the effort for the companies themselves remains very low and that the forecasts yield clear recommendations for action. The prepared results are made available to users via attractive interfaces or can be fed directly back into their internal systems.

[Image: data refinery and predictive services based on machine learning algorithms]

Basics

Like archaeological artifacts, old data can lead to new insights when analyzed and evaluated with new methods or technologies. Throwing away data that is not needed today would therefore be a serious mistake and would hinder tomorrow’s innovations.

A data lake is not a technology, but a paradigm we follow. Whether structured or unstructured, we store data in its original format to avoid losing information.

Data that is regularly accessed is stored in big data databases so that it is quickly available on horizontally scaled infrastructures for processing in our analyses.

For high-performance processing of calculations on data in real time, we operate a microservice architecture: small data services independently perform calculations on large amounts of data and write their results back into our data lake.
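
A minimal, hedged sketch of what one of these data services might do, assuming pandas and invented paths and column names:

    import pandas as pd

    # Hypothetical sketch of one such data service: paths and columns are
    # invented for illustration. The service reads raw events from the
    # data lake, aggregates them, and writes the result back.
    RAW_PATH = "datalake/raw/sales_events.parquet"
    RESULT_PATH = "datalake/refined/daily_revenue.parquet"

    def run() -> None:
        events = pd.read_parquet(RAW_PATH)
        daily = (
            events.assign(day=events["timestamp"].dt.date)   # bucket by day
            .groupby(["day", "article_id"], as_index=False)
            .agg(revenue=("price", "sum"))                   # aggregate revenue
        )
        daily.to_parquet(RESULT_PATH, index=False)

    if __name__ == "__main__":
        run()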

To keep all microservices coordinated and working in harmony, we use appropriate tools for container orchestration.

Thus we avoid the large monolithic systems that were widespread before the cloud era. The flexibility and agility that these modern technologies bring expand our possibilities and create a perfect environment for new and innovative developments.

Technology Overview

The amount of stored data has grown considerably in recent years. Today, around 2.5 quintillion bytes of data are generated every day by the many billions of devices connected to the Internet. Between 2007 and 2009, as much data was stored as in the entire period before; in the last two years, it was nine times as much. You don’t have to be a mathematician to recognize the exponential growth in the amount of stored data.

It is becoming increasingly difficult for humans to decipher the value of this data and to derive recommendations for action from it. Machine learning means automatically translating data into instructions for action. These methods are indispensable wherever large amounts of data are generated and must be evaluated promptly.

To apply smart analytics to big data efficiently, we rely on open-source technologies such as Hadoop and Hive for our data lake, as well as on commercial providers when they offer a better solution.
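
For illustration, querying a Hive table from Python via Spark might look like this; the table and column names are invented:

    from pyspark.sql import SparkSession

    # Hypothetical sketch: query a Hive table in the data lake with Spark.
    # The table name "sales.transactions" is illustrative only.
    spark = (
        SparkSession.builder
        .appName("datalake-example")
        .enableHiveSupport()              # connect to the Hive metastore
        .getOrCreate()
    )

    top_articles = spark.sql("""
        SELECT article_id, SUM(quantity) AS units_sold
        FROM sales.transactions
        GROUP BY article_id
        ORDER BY units_sold DESC
        LIMIT 10
    """)
    top_articles.show()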

Data Streaming

  • Apache Kafka
  • Apache Spark Streaming
  • Apache Flink

Data Lake

  • Apache Hadoop
  • Apache Hive
  • SQL Server & S3

Data Refinery

  • Apache Cassandra
  • Apache Spark
  • Python & R

Data Visualization

  • R Shiny
  • Tableau
  • AngularJS
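
To give a flavour of how the streaming stack above fits together, here is a hedged sketch that reads a Kafka topic with Spark Structured Streaming; broker address and topic name are placeholders:

    from pyspark.sql import SparkSession

    # Hypothetical sketch: consume a Kafka topic with Spark Structured
    # Streaming. Broker address and topic name are placeholders.
    spark = SparkSession.builder.appName("streaming-example").getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "price-events")
        .load()
    )

    # Kafka delivers raw bytes; cast the payload to a string for processing.
    query = (
        events.selectExpr("CAST(value AS STRING) AS payload")
        .writeStream
        .format("console")                # print micro-batches for the demo
        .start()
    )
    query.awaitTermination()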

Details

Our integration architecture handles all systematic tasks and processes involved in using the data science services. Using adapters and connectors, companies can import their internal data and have it forwarded to the appropriate services. The integrated data is enriched with external data and used to determine optimal forecasts.

To read company data, we use Spark SQL on standard interfaces such as JDBC and ODBC. This enables every customer to use our services and to integrate their data into our Hadoop, NoSQL and in-memory databases. With Python and R, intelligent algorithms are applied to this big data and deliver individual results for each customer via attractive web interfaces written in Java, Spring and AngularJS.
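
A minimal sketch of such a JDBC import with Spark; connection details and table names are placeholders:

    from pyspark.sql import SparkSession

    # Hypothetical sketch: import a customer table over JDBC with Spark.
    # Host, database, table and credentials are placeholders.
    spark = SparkSession.builder.appName("jdbc-import").getOrCreate()

    orders = (
        spark.read
        .format("jdbc")
        .option("url", "jdbc:sqlserver://customer-host:1433;databaseName=erp")
        .option("dbtable", "dbo.orders")
        .option("user", "reader")
        .option("password", "***")
        .load()
    )

    # Land the raw extract in the data lake in its original structure.
    orders.write.mode("overwrite").parquet("s3a://datalake/raw/erp/orders")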


SaaS stands for "software as a service" and means that applications are made available on demand. SaaS has become a widespread business model, with cloud applications growing in number and variety.

For our customers, this means that they do not have to purchase, set up and maintain their own IT infrastructure. Our cloud-based services automatically scale the necessary resources to their needs.

SaaS also means always working with the latest software and being able to adopt new services quickly and easily. Compared to complex internal IT projects with long lead times, a SaaS implementation is completed quickly and ensures that the software is in productive use before it becomes obsolete.

Even after our customers are connected to a service, we continue to update the software and infrastructure in regular cycles in order to meet the requirements of a fast-moving market and competition.

Data science is the science of unlocking the value of data and is arguably the hype topic of the 21st century. The applications of data science projects are almost inexhaustible and as varied as the industries in which they are used. Retailers in particular expect high potential from applying intelligent algorithms to optimal decision-making. Since these decisions, at least initially, are not made by the machine alone but will always be joint human-machine decisions, the transparency and comprehensibility of the data science methods used are a central topic.

Solutions developed at aifora combine many years of industry know-how with current data science methods in order to meet market requirements.

How can we best explain to a user the "K-Nearest Neighbor" algorithm, which classifies an article according to its characteristics and sales quotas and assigns it to a reference group? Providing comprehensible answers to such questions is particularly important to us, because transparency creates acceptance, and without acceptance our solutions cannot be implemented successfully.
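
One possible answer is to show a toy version of the idea, here with scikit-learn and invented article data:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Toy example with invented data: each article is described by two
    # features (price in EUR, sales quota in %) and belongs to a group.
    X_train = np.array([[19.99, 80], [24.99, 75], [59.99, 30], [64.99, 25]])
    y_train = np.array(["fast-seller", "fast-seller",
                        "slow-seller", "slow-seller"])

    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(X_train, y_train)

    # A new article is assigned to the group of its nearest reference
    # articles, a decision a user can inspect and retrace.
    new_article = np.array([[22.50, 70]])
    print(model.predict(new_article))      # -> ['fast-seller']

Because the assignment can be explained by pointing at the nearest reference articles, the reasoning stays transparent to the user.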

"Machine learning is transforming data into actionable insights." This sentence best describes what machine learning is used for: in the end, intelligent algorithms automatically extract information from data so that it can be translated into recommendations for action. This makes machine learning difficult to distinguish from similar areas such as data mining, where information is likewise extracted from data using intelligent algorithms. We therefore see machine learning in different stages of expansion:

  • In retrospective analyses, data scientists use intelligent algorithms to search for patterns in historical data. The appropriate models and parameters are evaluated manually at this stage (very much like classical data mining).
  • In live operation, intelligent algorithms are constantly supplied with new data and extend the model. Here the models are already fixed, but parameters can still be adjusted manually by a data scientist.
  • In full automation, not only are the models constantly extended, but the parameters are also adapted continuously without intervention. At this point one speaks of a "self-adaptive algorithm", which probably corresponds best to what is generally understood by machine learning.

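The "live operation" stage can be illustrated with an incremental learner; a hedged sketch using scikit-learn and synthetic data:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Synthetic stand-in for "live operation": the model is repeatedly fed
    # new batches of data and extends itself via partial_fit.
    rng = np.random.default_rng(0)
    model = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])

    for batch in range(5):
        X = rng.normal(size=(100, 3))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # invented target rule
        model.partial_fit(X, y, classes=classes)  # classes needed on first call

    print(model.predict(rng.normal(size=(3, 3))))
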
Machine learning is an elementary component of artificial intelligence, since there, too, the goal is to learn from data automatically.

Artificial intelligence, however, is primarily concerned with applications that simulate human-like decision-making in an ambiguous environment. This can be, for example, a chatbot in customer support or a speech recognition tool such as Siri or Alexa.

A major challenge in the development of artificial intelligence is solving tasks that are easy for humans to perform but mathematically difficult to grasp. This challenge gave rise to neural networks and deep learning: this approach uses multi-layered, hierarchically arranged models to capture complex relationships, for example in image recognition, where an image initially exists only as a collection of pixels.
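
As a hedged sketch of such multi-layered, hierarchically arranged models, a minimal convolutional network (here in PyTorch, chosen only for illustration) might look like this:

    import torch
    from torch import nn

    # Illustrative sketch only: a minimal convolutional network.
    # Hierarchical layers turn raw pixels into increasingly abstract features.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edges, colours
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: shapes, textures
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 10),                    # high level: 10 class scores
    )

    image = torch.randn(1, 3, 32, 32)   # a dummy 32x32 RGB image as raw pixels
    print(model(image).shape)           # -> torch.Size([1, 10])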

More insights? Looking for a tech-talk?