THE INNER WORKINGS OF TRANSFORMATION

The processing chain takes raw data and, step by step, extracts the elements needed for decision-making. After denoising, the filtered data is used to build coherent learning sets that are fed to the learning algorithms, whose performance is then tested on other, equally coherent datasets.
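
A minimal sketch of such a chain, assuming SciPy and scikit-learn are available (the median filter, the synthetic data and the Gaussian naive Bayes classifier are illustrative stand-ins, not prescribed choices):

    import numpy as np
    from scipy.signal import medfilt
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    # Raw data: two noisy classes of 1-D samples (synthetic, for illustration).
    X_raw = np.vstack([rng.normal(loc, 1.0, size=(100, 16)) for loc in (0.0, 3.0)])
    y = np.repeat([0, 1], 100)

    # Step 1: denoising -- a median filter applied to each sample.
    X = np.apply_along_axis(lambda s: medfilt(s, kernel_size=3), 1, X_raw)

    # Step 2: build coherent learning and test sets from the filtered data.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)

    # Step 3: learn on one set, test performance on the other.
    model = GaussianNB().fit(X_train, y_train)
    print(f"test accuracy: {model.score(X_test, y_test):.2f}")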

Pre-processing

We design your solution as a processing sequence built from a series of operators. Our representation is similar to that found in automata theory.

An application is defined as a response to a problem in the form of a sequence of actions. Each action is performed in response to a specific situation, and the sequence of those actions leads to the objective.
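
A minimal sketch of this operator-sequence representation (all names are hypothetical; each operator maps one state of the data to the next, like a transition in an automaton):

    from typing import Callable

    import numpy as np

    Operator = Callable[[np.ndarray], np.ndarray]

    def run_chain(data: np.ndarray, chain: list[Operator]) -> np.ndarray:
        """Apply each operator in turn: the output of one action is the
        situation presented to the next, until the objective is reached."""
        for op in chain:
            data = op(data)
        return data

    # Example chain: centre the data, then clip outliers.
    chain = [
        lambda x: x - x.mean(),
        lambda x: np.clip(x, -3.0, 3.0),
    ]
    result = run_chain(np.array([1.0, 2.0, 50.0]), chain)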

Classic operators include (each is sketched after the list):

  • denoising,
  • filtering,
  • segmentation,
  • binarisation.
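
A toy illustration of the four operators on a one-dimensional signal, in pure NumPy (window sizes and thresholds are arbitrary values chosen for the example):

    import numpy as np

    def denoise(signal: np.ndarray, window: int = 5) -> np.ndarray:
        """Moving-average denoising."""
        kernel = np.ones(window) / window
        return np.convolve(signal, kernel, mode="same")

    def bandpass(signal: np.ndarray, low: float, high: float) -> np.ndarray:
        """Crude frequency-domain band-pass filtering."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(signal.size)
        spectrum[(freqs < low) | (freqs > high)] = 0.0
        return np.fft.irfft(spectrum, n=signal.size)

    def segment(signal: np.ndarray, length: int) -> np.ndarray:
        """Cut the signal into fixed-length segments (remainder dropped)."""
        n = signal.size // length
        return signal[: n * length].reshape(n, length)

    def binarise(signal: np.ndarray, threshold: float) -> np.ndarray:
        """Map each sample to 0 or 1 around a threshold."""
        return (signal > threshold).astype(int)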

We specialise in:

  • Modelling operators,
  • Optimising parameters in a chain of operators (a sketch follows this list),
  • Implementing real-time operation of a chain of operators.
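
For parameter optimisation, one simple strategy is an exhaustive grid search over the operators' parameters, scoring each configuration on held-out data. A sketch under that assumption (the chain, the parameter grids and the score are all illustrative):

    import itertools

    import numpy as np

    def score_chain(window: int, threshold: float,
                    X_val: np.ndarray, y_val: np.ndarray) -> float:
        """Hypothetical score: run a denoise-then-binarise chain with these
        parameters and measure agreement with the validation labels."""
        kernel = np.ones(window) / window
        denoised = np.array([np.convolve(x, kernel, mode="same") for x in X_val])
        predictions = (denoised.mean(axis=1) > threshold).astype(int)
        return float((predictions == y_val).mean())

    def grid_search(X_val: np.ndarray, y_val: np.ndarray):
        windows = [3, 5, 9]
        thresholds = [0.5, 1.0, 1.5]
        # Keep the (window, threshold) pair with the best validation score.
        return max(itertools.product(windows, thresholds),
                   key=lambda p: score_chain(p[0], p[1], X_val, y_val))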

Creating learning sets

We refer to the set of data characterising the problem to be solved as the “data landscape”.

Characterisation is fundamental to understanding the topology of these landscapes, so that suitable classification tools can be associated with them. Moreover, as infrastructure ages slowly or collapses suddenly, the landscape changes: data groups shift relative to each other, making classification tools obsolete. Ignoring this variability of the data landscape when classifying the data results in increased (two of these measures are sketched after the list):
  • Confusion measures
  • Ambiguity measures
  • Distance rejection measures
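
As an illustration, ambiguity rejection and distance rejection are often measured along these lines (a generic sketch; the margin, the threshold and the use of class centroids are assumptions made for the example):

    import numpy as np

    def ambiguity_reject(posteriors: np.ndarray, margin: float = 0.1) -> bool:
        """Reject when the top two class posteriors are too close to separate."""
        top2 = np.sort(posteriors)[-2:]
        return bool(top2[1] - top2[0] < margin)

    def distance_reject(x: np.ndarray, centroids: np.ndarray,
                        max_dist: float) -> bool:
        """Reject when a sample lies far from every known class centroid,
        i.e. outside the learned data landscape."""
        dists = np.linalg.norm(centroids - x, axis=1)
        return bool(dists.min() > max_dist)

    # As the landscape drifts, both rejections fire more often, signalling
    # that the classification tools need re-characterisation or re-learning.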

Our dataset characterisations include (the first two are illustrated in the sketch after the list):
  • Factor analysis
  • Spectral analysis
  • Metrics
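
As a flavour of the first two, a principal-component view (a close relative of factor analysis): the spectral decomposition of the covariance matrix exposes the dominant factors of the landscape. A minimal NumPy sketch:

    import numpy as np

    def principal_factors(X: np.ndarray, k: int = 2):
        """Spectral analysis of the covariance matrix: the top-k eigenvectors
        give the dominant directions (factors) of the data landscape."""
        Xc = X - X.mean(axis=0)                     # centre the data
        cov = np.cov(Xc, rowvar=False)              # covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)      # spectral decomposition
        order = np.argsort(eigvals)[::-1][:k]       # strongest factors first
        explained = eigvals[order] / eigvals.sum()  # share of variance explained
        return eigvecs[:, order], explained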

Learning

The data is now available in the form of a landscape. We must then determine the homogeneous regions that form the categories of the problem to be solved.

Learning consists of constructing these boundaries using a suitable algorithm. The strategy for presenting data to the algorithm is fundamental: it decides whether learning converges toward boundaries that generalise well or drifts into over-fitting (a sketch follows).
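
One concrete version of this strategy is to hold out a validation set and let model complexity grow only while the validation error keeps falling. A sketch with polynomial boundaries (the model family, the split and the data are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 60)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

    # Presentation strategy: split once; the algorithm never sees the
    # validation points during fitting.
    train, val = np.arange(0, 60, 2), np.arange(1, 60, 2)

    best_degree, best_err = None, np.inf
    for degree in range(1, 12):
        coeffs = np.polyfit(x[train], y[train], degree)
        val_err = np.mean((np.polyval(coeffs, x[val]) - y[val]) ** 2)
        if val_err < best_err:  # validation error still improving
            best_degree, best_err = degree, val_err
        # Past the optimum, validation error rises again: the classic
        # signature of over-learning on the training presentation.
    print(f"selected degree: {best_degree}")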

The result is a software component representative of the dataset and the learning procedures.

As specialists, we have expertise in all effective approaches, such as (the first family is sketched after the list):

  • Probabilistic (Bayes, Markov, etc.)
  • Non-probabilistic (belief functions, possibility theory, etc.)
  • Connectionist (supervised or unsupervised)
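
As an example from the probabilistic family, a self-contained Gaussian naive Bayes classifier (a generic sketch of the textbook method, not our production component; the small variance floor is an arbitrary numerical safeguard):

    import numpy as np

    class GaussianNaiveBayes:
        """Each class is modelled by independent per-feature Gaussians;
        prediction picks the class with the highest posterior (Bayes' rule)."""

        def fit(self, X: np.ndarray, y: np.ndarray):
            self.classes = np.unique(y)
            self.means = np.array([X[y == c].mean(axis=0) for c in self.classes])
            self.vars = np.array([X[y == c].var(axis=0) + 1e-9
                                  for c in self.classes])
            self.priors = np.array([(y == c).mean() for c in self.classes])
            return self

        def predict(self, X: np.ndarray) -> np.ndarray:
            # Log-likelihood of each sample under each class's Gaussian model.
            ll = -0.5 * (np.log(2 * np.pi * self.vars)[None]
                         + (X[:, None] - self.means) ** 2
                         / self.vars[None]).sum(axis=-1)
            return self.classes[np.argmax(ll + np.log(self.priors), axis=1)]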