A domain adjacent to Machine Learning — Principle Driven Problems

Oct 17, 2022 | Jagannath Rajagopal | 4 min read


One aspect of grad school I still find memorable was the emphasis on Operations Research, Systems Engineering and real-world application. It was the first time I was introduced to methods like Optimization, Simulation and Regression; together with Learning and Search, these are what we at Kado call Hero Methods.

There was a continuum between academia and industry, with much of the research funded by companies or by the government. We got to see many of the methods we learned applied to real problems in Manufacturing, Warehousing, Distribution, Logistics, Finance, Algorithms and other areas.

I saw this amazing world open up in front of my eyes, with a whole spectrum of methods that solved various problems. Regression was just one method, taught in one class. Today, though, Machine Learning is primary. It's the one that took all the others mainstream.


For the sake of argument, put problems in Machine Learning and Data Science into a single bucket: data-driven problems. These you are probably familiar with. Big data, Data Science & Engineering, structured vs. unstructured data, and so on.

There is another category: problems driven by principles, methods, algorithms, and math, where laws and theorems take precedence over data. Principle-driven problems. These are complex math systems in different domains (Engineering Design, Drug Design, Financial Markets, Supply Chain Management, Social Systems, etc.) modelled mathematically from principles and laws.

The methods I mentioned before — optimization, simulation — even machine learning — are highly relevant here. They make for effective support personnel.


I’ve created a LOT of resources on this topic. Here’s my course on Design Thinking for Hero Methods. Here’s my YouTube channel with previews of course videos. Here’s my website; navigate through the courses to find free previews & pdfs.

- - -

An optimization wrapper can pleasantly surprise you with a calibration you weren't expecting. Complex math systems have parameters that need setting. That task can be set up as an optimization (or search) problem, provided you can express the goals for system performance, as well as the constraints, mathematically. The aim may not even be to find the best answer, just something directionally correct. Methods from Search & Planning and Heuristic Optimization are good candidates for tuning and calibrating math systems.
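As a minimal sketch of this idea: below, a toy two-parameter "math system" stands in for a complex model, and plain random search plays the role of a heuristic optimizer. The system, the observations, and the loss are all assumptions for illustration; the point is that even a crude search finds a directionally correct calibration without any gradient information.

```python
import random

# Hypothetical "math system" with two parameters (a, b) to calibrate.
# In practice this would be a complex simulation or engineering model.
def system_output(a, b, x):
    return a * x + b * x * x

# Assumed observed behaviour we want the system to reproduce.
observations = [(1.0, 3.0), (2.0, 10.0), (3.0, 21.0)]

# Goal expressed mathematically: squared error against the observations.
def loss(a, b):
    return sum((system_output(a, b, x) - y) ** 2 for x, y in observations)

# Plain random search: not guaranteed optimal, just directionally correct.
random.seed(0)
best, best_loss = (0.0, 0.0), loss(0.0, 0.0)
for _ in range(5000):
    cand = (random.uniform(-5, 5), random.uniform(-5, 5))
    cand_loss = loss(*cand)
    if cand_loss < best_loss:
        best, best_loss = cand, cand_loss

print("calibrated parameters:", best, "loss:", best_loss)
```

Swapping the random sampler for simulated annealing, an evolutionary method, or a planner changes the search strategy, not the setup: the wrapper only needs a loss to evaluate.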


A simulation could surface emergent behaviour that is insightful. Sometimes you want to know how a system performs under different conditions. Math systems may have inputs and outputs, along with one or more process steps. If those steps are not linear, the behaviour can be hard to detect and predict, especially when it is emergent. That is where a simulation wrapper can really help. By running simulations, even with input settings that are reasonably well known, surprising interactions and behaviours can emerge.
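To make this concrete with a deliberately tiny example (the system here is the classic logistic map, chosen as a stand-in because it is nonlinear and one line long): a simulation wrapper sweeps a single input setting and records long-run behaviour. The shift from a steady state to oscillation to chaos is exactly the kind of emergent behaviour that is hard to read off the formula itself.

```python
# Toy nonlinear "system": the logistic map x -> r * x * (1 - x).
# A simulation wrapper runs it forward and keeps the tail of the trajectory.
def simulate(r, x0=0.2, steps=500, keep=8):
    x = x0
    tail = []
    for i in range(steps):
        x = r * x * (1 - x)
        if i >= steps - keep:
            tail.append(round(x, 6))
    return tail

# Sweep a few input settings and summarise the long-run behaviour.
for r in (2.8, 3.2, 3.9):
    tail = simulate(r)
    print(f"r={r}: {len(set(tail))} distinct long-run value(s)")
```

At r=2.8 the system settles to a single fixed point, at r=3.2 it oscillates between two values, and at r=3.9 it never repeats; nothing in the one-line update rule announces those regime changes in advance, which is the argument for simulating.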

A surrogate learner can act as an effective filter. Surrogates can replace math systems and act as initial filters, especially if computation is very resource-intensive. Simulation is a great example of this; one of its challenges is determining which combinations of input values need to be simulated. Without some sort of useful guide, the task would be to simulate all combinations, i.e. the full feature space. As you can imagine, this is nigh impossible as well as highly inefficient. By training a learner on past simulation data, the resulting model can act as a surrogate for the simulation and identify only those input combinations that need to be simulated. Of course you'll still need to simulate those, but as you can see, the number of simulations can be drastically reduced.
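A rough sketch of the filtering pattern, with everything simplified for illustration: the "expensive" simulation is a cheap stand-in function, and the surrogate is a deliberately naive nearest-neighbour lookup over past runs rather than a trained regression or ML model. The shape of the workflow is what matters: score every candidate with the cheap surrogate, then pay for the real simulation only on the promising ones.

```python
# Stand-in for an expensive simulation; imagine hours of compute per call.
def expensive_simulation(x, y):
    return (x - 3) ** 2 + (y - 2) ** 2

# Past simulation data we have already paid for (a coarse 3x3 grid).
past = [((x, y), expensive_simulation(x, y))
        for x in range(0, 6, 2) for y in range(0, 6, 2)]

# Naive surrogate: predict a candidate's outcome from its nearest past run.
# A real project would train a regression or ML model on `past` instead.
def surrogate(x, y):
    nearest = min(past, key=lambda p: (p[0][0] - x) ** 2 + (p[0][1] - y) ** 2)
    return nearest[1]

# Full candidate feature space: 11 x 11 = 121 input combinations.
candidates = [(i * 0.5, j * 0.5) for i in range(11) for j in range(11)]

# Filter first, then simulate only the candidates the surrogate likes.
promising = [c for c in candidates if surrogate(*c) <= 2.0]
results = {c: expensive_simulation(*c) for c in promising}

print(f"simulated {len(promising)} of {len(candidates)} candidates")
```

The surrogate is wrong in detail (it only ever predicts values it has seen) yet still useful as a filter, which is the essential bargain: trade a little predictive accuracy for a large cut in simulation count.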


- - -

At Kado, one of the things we teach is how to design wrappers around complex math systems. If you view a Machine Learning model as a complex math system, configuring and training it is itself a parameter-setting task. Not surprisingly, the most popular training methods, gradient-based and evolutionary alike, involve an optimization process. Seen from this angle, an optimizer is being used to calibrate a machine learning model. This alone is proof of the power of Hero Methods as wrappers.
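The point can be shown in a few lines. Below is a minimal sketch, with toy data assumed, of training a one-feature linear model by gradient descent: the "learning" loop is literally an optimizer adjusting the model's parameters w and b to minimize a loss.

```python
# Assumed toy data generated from y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# Model parameters to calibrate, and a learning rate for the optimizer.
w, b = 0.0, 0.0
lr = 0.05

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # One optimization step: move parameters against the gradient.
    w, b = w - lr * gw, b - lr * gb

print("trained parameters:", round(w, 3), round(b, 3))
```

Strip away the vocabulary and there is no separate "training" machinery here at all, just an optimization wrapper calibrating a two-parameter math system.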

Don't hesitate!

Design Thinking for Hero Methods
