Meta-programming
Many programming languages have features that help support meta-programming.
Examples include:
- templates,
- macros,
- reflection/introspection.
Templates and macros can enable computations to be moved to compile-time.
The benefit of this is faster execution at run-time.
The drawback is that code now needs to be written differently depending on when it is to be run.
This can be a problem if we want to use that same code at different times.
If there are two implementations, they risk getting out of sync.
Consider a parser.
A parser can be written in an interpretive style, interpreting the rules of the grammar at run-time.
If this interpretation is done in templates or macros, the resulting parser will run without interpretive overhead.
The downside is that the parsing algorithm can now only be given a new grammar at compile-time.
It would not be possible to write a diagnostic tool, for use during grammar development, that reused the same parser implementation.
The prospect of dynamic grammars may sound contrived.
For a more concrete example, consider a parser for a binary format, such as an image or video file (or stream).
Binary formats are often highly parameterized, for example, by colour depth.
The header will contain the actual parameters; the body then conforms to those parameters.
Generating parsers for every possible parameterization may result in prohibitively large executables.
Always consulting the parameters may result in prohibitively long execution time.
Somewhere between the two lies a trade-off, but where exactly may depend on external factors.
For a diagnostic tool, flexibility and universality are more important.
For an embedded device, efficiency for a specific case is more important.
For a format with a stable specification, multiple implementations are a workable solution.
But this isn't ideal.
Not all formats are stable, and even those that are don't typically start as such.
Is it possible to write the code once, independently of how we wish to use it later?
We may wish to specialize the code to some subset of its parameters.
We may wish to run the code on the backend or frontend, inside a database, or target a GPU, NPU, FPGA, or ASIC.
Why not capture all the fiddly details as data (for example as XML or JSON)?
This is a common approach, used in many places.
It starts off well, but fiddly details have an annoying habit of getting everywhere.
The data format can become steadily more complex, and the numerous consumers of this format need to be kept in sync.
What if we could write the code once and be done with it?
Specializing away functionality we don't need, and translating the result to whichever language we do need,
will, in many cases, produce much the same code we would previously have had to write and maintain in multiple languages.
Component-specific and application-specific code boundaries
The application code connects everything together for a specific purpose.
There are multiple ways of connecting software components together, each with different trade-offs.
Concurrency needs to be handled, and there are multiple approaches.
These differences make it harder to write component-specific code in a way that can be reused.
To some extent, differences can be abstracted over.
For example, should a component write its output to memory, or directly call the next component?
We can have both, at the cost of an indirect function call.
An indirect function call might not sound like a great expense, but we also lose opportunities for optimization, since the compiler cannot inline across the call.
The (intended) Ferrum solution is to use effect-handlers and specialization.
This makes it possible to keep component-specific code decoupled, but generate tightly-coupled code that intermingles code from multiple components.