From Public Research to Private Companies — Improving Data Transfer Knowledge
Efrain Sotelo Ferry · 19 Mar 2026
If you have spent years developing a novel fatigue assessment method, a new wake model, or a better approach to turbulence generation, you know the finish line: peer review, publication, maybe a conference presentation. Then what?
The paper enters a database. It gets cited. Occasionally, someone emails you asking for clarification on equation 14. But the method itself — the thing you spent years refining — rarely leaves the PDF. Industry teams read the abstract, bookmark it, and move on. Not because the work lacks value, but because the distance between a published method and a production-ready tool is enormous.
This is not the researchers' fault. It is a systemic gap in how knowledge transfers.
The Paper-to-Practice Gap
Consider a typical scenario. A research group at a university develops an improved method for estimating blade root fatigue loads under complex inflow conditions, including erosion. The method is validated against field data. The results are published in a respected journal. The code exists as a collection of MATLAB or Python scripts on the lead author's workstation.
Now imagine an engineering team at a turbine manufacturer who would benefit from exactly this method. They read the paper. They understand the theory. But to actually use it, they would need to: re-implement the algorithm from the paper's description, guess at implementation details that did not fit in the publication, validate their re-implementation against the paper's results, and integrate it into their own toolchain. That is weeks of work for an uncertain outcome, so they default to the method they already have.
The research exists. The need exists. The connection does not.
What If Research Were Executable?
The idea is straightforward: instead of only publishing a paper, a researcher also publishes the method as an executable action — a containerized computation step in a Pencilroads public project. The action takes defined inputs (a turbulence box, a structural model, a set of load cases) and produces defined outputs. It is versioned. It runs in a reproducible environment. Anyone can inspect it.
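To make the idea concrete, a published action might expose an interface along these lines. This is a hypothetical sketch, not the actual Pencilroads API — the class names, field names, and types are illustrative assumptions about what "defined inputs and outputs" could look like:

```python
# Hypothetical sketch of a published action's interface (NOT the real
# Pencilroads API): a typed contract between public method and private data.
from dataclasses import dataclass, field

@dataclass
class ActionInputs:
    turbulence_box: str                             # path to a turbulence field file
    structural_model: str                           # path to a structural model definition
    load_cases: list = field(default_factory=list)  # load case identifiers

@dataclass
class ActionOutputs:
    fatigue_loads: dict  # e.g. channel name -> estimated fatigue load

def run(inputs: ActionInputs) -> ActionOutputs:
    # The containerized method would execute here; placeholder body
    # so the sketch stays self-contained.
    return ActionOutputs(fatigue_loads={})
```

The point of the typed contract is that a consumer can see, before running anything, exactly which artifacts the method expects and what it will produce.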
A company can then reference that public action from within their own private workflow. They supply their proprietary data — turbine geometry, site conditions, control parameters — and the research method runs on it. No re-implementation needed. No guessing about what the authors meant by "the modified Goodman correction with a slope of 10." The exact code that produced the paper's results is now processing the company's data.
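The "slope of 10" ambiguity is worth unpacking, because it shows how much interpretation a paper can demand. One common reading is the Wöhler exponent m in a Palmgren-Miner damage-equivalent load, with the Goodman correction adjusting stress amplitude for mean stress. Both formulas below are textbook-standard fatigue relations, sketched here for illustration — not the specific method of any one paper:

```python
import numpy as np

def goodman_corrected_amplitude(sigma_a, sigma_m, sigma_ult):
    """Equivalent fully-reversed amplitude via the Goodman mean-stress correction."""
    return sigma_a / (1.0 - sigma_m / sigma_ult)

def damage_equivalent_load(ranges, counts, m=10.0, n_eq=1e7):
    """Palmgren-Miner damage-equivalent load for S-N slope m over n_eq cycles."""
    ranges = np.asarray(ranges, dtype=float)
    counts = np.asarray(counts, dtype=float)
    return (np.sum(counts * ranges**m) / n_eq) ** (1.0 / m)
```

Even these few lines hide choices a reader must guess at from prose alone — how cycles were counted, which n_eq was used, whether the correction was applied per cycle or per bin. Shipping the code removes the guesswork.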
The researcher shares the method, which was already public in the paper. The company's data never leaves their private project.
The Reproducibility Question
Computational research has a reproducibility problem, and it is not limited to wind energy. A 2016 survey in Nature found that more than 70% of researchers have tried and failed to reproduce another scientist's experiments. In computational fields, the problem is particularly ironic: the work is in principle deterministic, but the environment — library versions, random seeds, data preprocessing steps, undocumented parameters — makes exact reproduction nearly impossible from a paper alone.
Sharing executable workflows rather than just papers addresses this directly. When the method is a containerized action with pinned dependencies and a defined interface, anyone can verify the results. A reviewer can run the action. A competitor can test edge cases. A standards body can include it in a benchmark suite. The work becomes auditable in a way that PDFs cannot be.
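When reference results ship alongside the action, verification reduces to rerunning the published test case and comparing within tolerance. A minimal sketch — the quantity names and numbers here are invented purely for illustration:

```python
import math

# Reference values that would ship with a published action
# (values invented here for illustration only).
published_reference = {"blade_root_DEL": 4821.3, "tower_base_DEL": 9130.8}

def verify(results, reference, rel_tol=1e-6):
    """Check a rerun of the action's test case against its published reference."""
    return all(
        math.isclose(results[k], ref, rel_tol=rel_tol)
        for k, ref in reference.items()
    )
```

A reviewer, competitor, or standards body can apply exactly this check without ever contacting the authors — the action plus its reference values is a self-verifying claim.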
A Feedback Loop That Benefits Everyone
When a company uses a research action on real project data — not the clean validation dataset from the paper — they discover things. Edge cases the researcher did not anticipate. Numerical instabilities at extreme wind speeds. Performance bottlenecks with large structural models. This is valuable information.
In the current model, these discoveries are lost. The company works around the issue internally. The researcher never learns about it. With open actions, companies can report bugs, suggest improvements, or even contribute fixes back to the public project. The researcher gets real-world feedback that makes the method better. The company gets an improved tool. Everyone wins.
This is the same dynamic that made open-source software successful. Linux, OpenFOAM, FEniCS, and dozens of other tools started as research projects that industry adopted and improved. The difference here is that the unit of sharing is not an entire software package — it is a single, focused computation step that plugs into a larger workflow.
The Incentive Problem — And a Possible Solution
Researchers are rewarded for papers: citations, h-index, journal impact factor. Writing good software is not part of the evaluation. A researcher who spends six months cleaning up their code and packaging it as a reusable tool gets no academic credit for that work. The incentive structure actively discourages the very thing that would make research more useful.
What if there were a different metric? Not just "how many papers cite your work" but "how many engineering teams ran your method on real data." A public action on Pencilroads could track usage: how many workflows reference it, how many times it has been executed, how many organizations depend on it. This does not replace citations, but it adds a dimension of impact that funding bodies and hiring committees could recognize.
A researcher whose wake model action is used by 40 companies across three continents has demonstrably different impact than one whose wake model paper was cited 40 times. Both matter. But only one means the method is actually being used.
Precedents From Other Fields
This pattern is not hypothetical. OpenFOAM began as academic CFD code and is now an industry standard used by automotive, aerospace, and energy companies worldwide. FEniCS started as a university project for solving partial differential equations and is now used in production structural analysis. In machine learning, Hugging Face turned academic model weights into a shared resource that thousands of companies build on.
Wind energy and structural engineering have not had their equivalent moment yet. The tools exist — OpenFAST, WISDEM, QBlade — but the infrastructure for sharing smaller, composable methods (not entire simulation frameworks) does not. That is the gap.
What This Means in Practice
Here is a concrete scenario:
- A research group publishes a paper on a new method for estimating tower shadow effects on blade loads. They implement the method as a Pencilroads action in a public project, with clear input/output definitions and a test case.
- An engineering consultancy working on a wind farm assessment sees the action. They reference it from their private workflow, feeding it their site-specific turbine and layout data. The action runs on their data in their environment.
- The consultancy finds that the method produces unrealistic results for a specific turbine with a large rotor overhang. They file an issue on the public project with details.
- The research group fixes the edge case and publishes a new version of the action. The consultancy updates the reference and reruns their workflow. Both parties are better off.
This entire cycle — from publication to real-world use to improvement — currently takes years when it happens through papers alone. With executable methods, it can happen in weeks.
IP Concerns, Addressed
A fair question: does sharing an action expose intellectual property? The answer is no, for the same reason that publishing a paper does not. The method is already public — it is in the paper. The action is simply the method in executable form rather than in prose form. If anything, the action is more transparent than the paper, because the code is explicit where the paper's description might be ambiguous.
The valuable IP in most engineering companies is not the methods — it is the data, the design parameters, the proprietary configurations, and the engineering judgment about how to combine all of it. That stays private. The public action processes the private data inside the company's environment; nothing flows back to the researcher's project.
A Bridge, Not a Revolution
This is not about replacing journals or upending academic publishing. Papers remain the primary medium for explaining why a method works, what its theoretical basis is, and how it compares to alternatives. What changes is what happens after publication.
Instead of the method sitting in a PDF, it also exists as a running, testable, reusable piece of software. The paper tells the story. The action does the work.
For researchers, this means their work lives beyond the citation count. For companies, it means access to the latest methods without months of re-implementation. For the field as a whole, it means faster iteration between theory and practice.
The knowledge already exists. The question is whether we build the roads to move it where it is needed.