Our expertise includes a variety of disciplines, with a common theme: Academic passion for challenging data problems and customised solutions for your business.
As the dimension and complexity of data grow exponentially, advanced analytic tools become necessary to extract meaning from this valuable information. Not only are we experts in a broad range of analytic techniques, but we can also invent and implement new tools suited to the problem at hand.
We offer efficient algorithms for risk assessment, classification, prediction, and anomaly detection on "Big Data". We have extensive experience in applying an array of methods, including Neural Networks, Support Vector Machines, Random Forests, Mixture Models, Hidden Markov Models, Dynamic Bayesian Networks, and Ensemble Methods. We apply these to financial, biological, environmental, business, energy, resource, and text-based systems. Besides determining which method is best for you, we can also customise and develop new methods, depending on the data problem and questions at hand.
Quantitative risk analysis
We offer quantitative risk analysis approaches which combine probability theory, statistics and machine learning to characterise the risk associated with uncertain events. We are also able to implement our combined risk assessments within project models (e.g. schedules, cost estimates, etc.), relying on mathematical and simulation tools, including Monte Carlo experiments, to calculate the probability and impact of different kinds of events. Quantitative risk analysis predicts likely project outcomes in terms of money or time based on the combined effects of risks, estimates the likelihood of meeting targets, and determines the resources needed to achieve a desired level of performance.
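As a sketch of the idea, the following minimal Monte Carlo experiment estimates the chance that a project exceeds its budget; the baseline cost, risk probabilities, and impacts are all hypothetical figures chosen for illustration:

```python
import random

random.seed(42)  # reproducible illustration

def simulate_project_cost(n_trials=10_000):
    """Simulate total project cost when two independent risk events
    may or may not occur (all figures are illustrative)."""
    costs = []
    for _ in range(n_trials):
        cost = 100.0                 # baseline cost estimate
        if random.random() < 0.30:   # risk A: 30% chance, +25 impact
            cost += 25.0
        if random.random() < 0.10:   # risk B: 10% chance, +60 impact
            cost += 60.0
        costs.append(cost)
    return costs

costs = simulate_project_cost()
# Estimated probability of exceeding a budget of 120
# (analytically 1 - 0.7 * 0.9 = 0.37):
p_over = sum(c > 120 for c in costs) / len(costs)
```

In a real engagement the inputs would come from the project model (schedule, cost estimates) rather than being hard-coded, but the simulate-then-count structure is the same.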
Extreme event estimation
Extreme value analysis (EVA) is a branch of data analysis dealing with extreme deviations from the mean. It seeks to estimate the probability of events more extreme than any previously observed. EVA is widely used in many disciplines, such as structural engineering, finance, earth sciences, traffic prediction, and geological engineering. For example, in the measurement of financial risk, EVA enables one to estimate the risk of extreme losses from historical data. Similarly, EVA can be used in meteorology to estimate the probability of unusually heavy rainfall, high wind speeds, or temperature extremes. We are actively and passionately researching this area. Our work is currently focused on developing EVA tools in the context of big data, to predict the occurrence of multiple extremes across a large number of variables.
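To illustrate the peaks-over-threshold idea behind EVA, the sketch below fits a simple exponential tail to synthetic loss data — a generalised Pareto with shape parameter zero, a common simplification — and extrapolates a tail probability beyond the bulk of the observations. The loss distribution and threshold are assumptions for the example:

```python
import math
import random

random.seed(0)
# Hypothetical daily losses (exponentially distributed, mean 10,
# purely for illustration)
losses = [random.expovariate(1 / 10.0) for _ in range(5000)]

# Peaks-over-threshold: model excesses above a threshold u with an
# exponential tail (a generalised Pareto with shape 0).
u = 30.0
excesses = [x - u for x in losses if x > u]
p_exceed_u = len(excesses) / len(losses)   # empirical P(X > u)
beta = sum(excesses) / len(excesses)       # mean excess = tail scale

def tail_prob(level):
    """Estimated P(X > level) for a level above the threshold u."""
    return p_exceed_u * math.exp(-(level - u) / beta)

# Probability of a loss far beyond anything typical in the sample:
p50 = tail_prob(50.0)
```

In practice the shape parameter would be estimated rather than fixed at zero, and threshold choice deserves care; the point here is that the tail model lets us quantify events rarer than most of the observed data.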
Network analysis
Network analysis is the process of investigating data structures through the use of networks and graph theory. Statistical network analysis views networked structures in terms of nodes (individual actors, people, or things within the network) and the ties, edges, or links (relationships or interactions) that connect them. Network analysis has emerged as a key technique in modern science and industry, playing an important role in a wide range of fields including economics, finance, biology, organisational studies, computer science, social psychology, and environmental studies.
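A minimal example of the nodes-and-edges view: given a toy network (the names and connections are invented for illustration), we can compute degree centrality — the share of other nodes each node is directly connected to:

```python
# Toy undirected network: nodes are people, edges are interactions.
edges = [("ana", "ben"), ("ana", "cat"), ("ben", "cat"),
         ("cat", "dan"), ("dan", "eve")]

# Build an adjacency list from the edge list.
adjacency = {}
for a, b in edges:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

# Degree centrality: fraction of the other nodes each node touches.
n = len(adjacency)
centrality = {node: len(neigh) / (n - 1)
              for node, neigh in adjacency.items()}
# "cat" bridges the two parts of the network and scores highest.
```

Richer measures (betweenness, eigenvector centrality, community detection) follow the same pattern: represent the data as nodes and edges, then query the graph.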
Predictive modelling and forecasting
Predictive modelling, sometimes referred to as predictive analytics, combines probability and stochastic process theory with statistics to predict outcomes. Often predictive modelling is used to forecast future events and quantify their uncertainty. Our expertise in this area gives us access to a wide range of predictive approaches with applications in numerous areas. For example, we have constructed and applied predictive models to forecast crime occurrence, environmental and meteorological variation, and cancer-cell growth rates.
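The simplest instance of the idea is an ordinary least-squares trend fit with a one-step-ahead forecast; the series below is made up for illustration:

```python
# Fit y = a + b*x by ordinary least squares, then forecast one step ahead.
xs = [0, 1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0]  # roughly y = 2 + 2x plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope: covariance of x and y over variance of x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x  # intercept

forecast = a + b * 6  # prediction for the next time step
```

Real applications replace the straight line with models suited to the data (time-series, hierarchical, or machine-learning models), and report uncertainty intervals alongside the point forecast.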
Spatial statistics
We are experts in a variety of statistical inference and prediction techniques suitable for the study of spatially referenced data, including spatial regression, Markov random fields, kriging, and conditional autoregressive models. Spatial statistics is often applied to problems at the human scale, most notably in the analysis of geographical data such as those arising in environmental sciences and economics. However, the same approaches are useful in any field where data have a spatial dimension (e.g. astronomy and engineering).
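The core idea of spatial prediction — that nearby observations should influence an estimate more than distant ones — can be sketched with inverse-distance weighting, a simpler cousin of kriging (kriging additionally models the spatial covariance). The sensor locations and values here are hypothetical:

```python
import math

# Observed values at known coordinates: (x, y, value).
observations = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0),
                (0.0, 1.0, 14.0), (1.0, 1.0, 16.0)]

def idw_predict(px, py, power=2.0):
    """Inverse-distance-weighted prediction at (px, py): closer
    observations receive larger weights."""
    num = den = 0.0
    for x, y, v in observations:
        d = math.hypot(px - x, py - y)
        if d == 0:
            return v  # exact hit on an observation point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Prediction at the centre, equidistant from all four observations:
center = idw_predict(0.5, 0.5)
```

Unlike kriging, this sketch gives no prediction uncertainty; that is precisely what the model-based spatial methods listed above add.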
Text data analysis
We use statistical and machine learning techniques to extract meaning from unstructured text data. Data sources include web content, free text, social network data, emails, and other forms of non-transactional information. Statistical methods and algorithms enable us to transform unstructured text into easy-to-interpret inferences and predictions. A key capability is mining massive amounts of text and documents and distilling the most important elements. For example, in the field of law we can apply methods that mine massive case loads and automatically search for precedents.
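A small illustration of turning raw text into something quantitative: TF-IDF scores terms highly when they are frequent in one document but rare across the collection, surfacing each document's distinctive vocabulary. The three "documents" below are invented snippets:

```python
import math
import re
from collections import Counter

# Tiny illustrative corpus (e.g. case summaries).
docs = [
    "the court granted the appeal on procedural grounds",
    "the court dismissed the appeal citing precedent",
    "the weather today is sunny with light wind",
]

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

tokenized = [tokenize(d) for d in docs]
# Document frequency: in how many documents each term appears.
df = Counter(term for doc in tokenized for term in set(doc))
n_docs = len(docs)

def tfidf(doc_tokens):
    """Term frequency times inverse document frequency."""
    tf = Counter(doc_tokens)
    return {t: (c / len(doc_tokens)) * math.log(n_docs / df[t])
            for t, c in tf.items()}

scores = tfidf(tokenized[0])
top_term = max(scores, key=scores.get)  # a distinctive word, not "the"
```

Terms appearing in every document (like "the") score zero, while document-specific terms rise to the top — the same principle that lets large document collections be searched and summarised automatically.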
Data visualisation
The main goal of data visualisation is to communicate information clearly and efficiently via statistical graphics, plots and information graphics. It makes complex data more accessible, understandable and usable. We bring you a number of data visualisation techniques to help you better understand your data by segmenting them into similar groups, finding frequent patterns in purchase activity, or identifying sequences of events that follow consistent patterns. This is especially important when dealing with complex and big data, where the number of variables and observations may be overwhelming.
Design of experiments and observational studies
If you do not yet have data, it should be collected in a principled and curated way to avoid information loss. To achieve this we use techniques from 'design of experiments' (DoE). DoE is the area of data science concerned with planning any task that aims to describe or explain the variation of information under multiple conditions, in order to answer specific questions. Although DoE is generally associated with traditional experiments or trials, in which the design introduces conditions that directly affect the variation and outcome, it may also refer to the design of observational studies (case-control studies, cohort studies, etc.), in which the natural conditions that influence variation are selected for.
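One of the simplest designs is the full factorial: every combination of the chosen factor levels is run, so main effects and interactions can be estimated without confounding. The factors and levels below are hypothetical:

```python
from itertools import product

# Hypothetical factors and their levels for a full-factorial design.
factors = {
    "temperature": [20, 40],
    "catalyst": ["A", "B"],
    "stir_rate": [100, 200, 300],
}

# Enumerate every combination of levels: 2 x 2 x 3 = 12 runs.
names = list(factors)
design = [dict(zip(names, combo)) for combo in product(*factors.values())]
```

When twelve runs are too many, fractional factorial or optimal designs trade some estimable interactions for fewer experiments — exactly the kind of trade-off DoE makes explicit.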
Bioinformatics
Bioinformatics is the large-scale analysis of biological data. We specialise in using machine learning techniques to identify what is driving a particular cancer and then zero in on how to fight it. Our focus is multivariate, large-scale genetic data: we can improve the prognosis, diagnosis and treatment of cancer patients by decoding the genetic landscape and tracking the genetic evolution that leads to treatment resistance over time. By identifying the emergence of resistance earlier than traditional methods allow, clinicians can change therapies and often prolong a patient's life.