It is time for racquet layup design to adopt an AI/ML-driven approach

PistolPete23

Hall of Fame
This post is motivated by my rewatching of an excellent TW video that showed an incredibly detailed demonstration of a Head prototype racquet being made. It was mentioned at one point in the video that even with the shapes of the pieces of prepreg that go into a layup held constant, there are still ~420 billion unique combinations of fiber orientations across the individual pieces. The static weight, balance, and swing weight of each variation would be identical, but the stiffness and harmonics would differ. What this tells me is that R&D cannot afford to be too exploratory; with so many possibilities to try and only a comparatively small number of experiments that can feasibly be carried out, you can’t afford to deviate too far from what already works. Inevitably, large gaps in the design space are left unexplored.

Take, for example, the Wilson Shift. The novelty of its layup pattern achieves stiff horizontal flex but compliant vertical flex, creating a unique hitting experience. This is just one example of a novel layup; I’d imagine there are many more waiting to be discovered. But how do you systematically explore this near-infinite design space? Simulation software as a guiding tool is one approach, but as far as I know, nothing on the market accurately captures the nuanced effects of fiber orientation.

I’d like to propose machine learning (ML) as a disruptive design paradigm. The premise of ML is that you train a model to learn complex patterns from historical data in order to make predictions on new data. The inputs of the model could be the layup pattern, the fiber orientations, and the characteristics of the mold. The design targets that the model predicts could be acoustic properties (characteristic frequency), flex profile, etc. Given a target set of specs, the model proposes promising layup patterns with varying degrees of predicted accuracy. The prototypes are made, their specs measured, and the new data is fed back to retrain and improve the model’s predictive performance. This kind of sequential learning would be both more efficient and more exploratory than the current design paradigm. It has proven successful in other manufacturing domains; let’s bring it into the racquet industry.
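To make the idea concrete, here’s a minimal sketch in Python/scikit-learn of the kind of surrogate model I’m describing. Everything in it is a placeholder I made up for illustration - the piece count, the allowed orientation angles, and the “measured” specs are fake data, not anything from a real layup:

Code:
# Minimal sketch of the surrogate-model idea (Python + scikit-learn).
# All data below is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

N_PIECES = 12                                  # hypothetical prepreg piece count
ANGLES = [0.0, 30.0, 45.0, 90.0]               # assumed allowable orientations (deg)

# One row per historical layup: the orientation angle of each piece
X = rng.choice(ANGLES, size=(300, N_PIECES))
y = np.column_stack([
    rng.normal(65, 3, 300),                    # stand-in for measured RA stiffness
    rng.normal(140, 8, 300),                   # stand-in for characteristic freq (Hz)
])

# Multi-output model: layup features in, predicted specs out
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X, y)

# Score a large pool of unseen candidate layups against a target spec
candidates = rng.choice(ANGLES, size=(100_000, N_PIECES))
pred = model.predict(candidates)
target = np.array([68.0, 150.0])               # desired RA stiffness and frequency
best = candidates[np.argsort(np.abs(pred - target).sum(axis=1))[:10]]
print(best)                                    # ten layups to prototype next

In reality the feature encoding would need real care (orientations are periodic and the pieces interact), but the train-predict-rank skeleton really is this simple.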
 

esgee48

G.O.A.T.
Teaching cause and effect is not that easy. Someone is going to be responsible for inputting the initial sets of data and the resultant characteristics. Then this has to be done as many times as they have different layups. Perhaps a multiple regression analysis could then be run. This assumes consistent fibers, accurate orientation, and uniform prepreg, which, if not true, will invalidate the results. What could be a major issue is multiple solutions for the same characteristics. ML is going to have major problems if that is true.
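To be clear, the regression I mean is something like this - on invented numbers, since clean real data is exactly what nobody has:

Code:
# The multiple regression I mean, run on invented, idealized numbers:
# per-piece orientation angles as predictors, measured stiffness as response.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.choice([0.0, 30.0, 45.0, 90.0], size=(150, 12))   # angles (deg)
y = 60 + X @ rng.normal(0.02, 0.01, 12) + rng.normal(0, 1.0, 150)  # fake stiffness

reg = LinearRegression().fit(X, y)
print("R^2:", round(reg.score(X, y), 3))
print("per-piece coefficients:", reg.coef_.round(3))

Note this predicts specs from a layup. Going the other way, from specs back to a layup, is where the multiple-solutions problem bites.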
 

PistolPete23

Hall of Fame
esgee48 said:
Teaching cause and effect is not that easy. Someone is going to be responsible for inputting the initial sets of data and the resultant characteristics. Then this has to be done as many times as they have different layups. Perhaps a multiple regression analysis could then be run. This assumes consistent fibers, accurate orientation, and uniform prepreg, which, if not true, will invalidate the results. What could be a major issue is multiple solutions for the same characteristics. ML is going to have major problems if that is true.
I agree that the quality and consistency of the data are important - garbage in, garbage out. But from my experience as a machine learning practitioner in the manufacturing space, you’ll never encounter an ideal dataset like the one you describe. There will always be noise in experimental data; you build models under the assumption that the signal isn’t drowned out by the noise, and that assumption has held up in most of the cases I’ve consulted on.

Also, one of the basic components of active (or sequential) ML is uncertainty quantification, i.e., each prediction comes with an associated confidence interval. Having multiple training examples with very different stiffness values for the same layup would contribute to more uncertain predictions, but it’s definitely not a deal breaker for ML unless the issue pervades the majority of the dataset. I do agree that preparing the training data is the greatest hurdle, but the same could be said for nearly all industrial use cases of ML. I still think it has potential.
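To illustrate the uncertainty point with a toy example (all numbers invented): a Gaussian process with a learned noise term absorbs conflicting measurements by widening its confidence intervals rather than breaking:

Code:
# Toy illustration (all numbers invented): a 1-D stand-in for a layup with two
# conflicting stiffness measurements at the "same" 45-degree configuration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.array([[0.0], [15.0], [30.0], [45.0], [45.0], [60.0], [90.0]])
y = np.array([60.0, 62.0, 64.0, 61.0, 69.0, 66.0, 70.0])  # note the 45-deg clash

# WhiteKernel lets the model learn a noise level; the 45-degree conflict drives
# that level up, widening the reported intervals instead of breaking the fit
gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1.0), normalize_y=True)
gp.fit(X, y)

grid = np.linspace(0.0, 90.0, 7).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
for ang, m, s in zip(grid.ravel(), mean, std):
    print(f"{ang:5.1f} deg -> predicted stiffness {m:5.1f} +/- {1.96 * s:4.1f}")

The conflicting pair inflates the learned noise level and the intervals reflect that honestly, which is exactly the behavior you want when the data disagrees with itself.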
 

veelium

Hall of Fame
For ML to make sense or work well, you need a lot of training data. Imo the few hundred different racquets that have been produced are not enough, and even for those there is a lot of variation due to QC, etc.

That said, they already have programs that predict the behaviour of a frame from the various types of graphite, mold, etc.
There was an Artengo presentation where you could see parts of the interface.
 

PistolPete23

Hall of Fame
veelium said:
For ML to make sense or work well, you need a lot of training data. Imo the few hundred different racquets that have been produced are not enough, and even for those there is a lot of variation due to QC, etc.

That said, they already have programs that predict the behaviour of a frame from the various types of graphite, mold, etc.
There was an Artengo presentation where you could see parts of the interface.
To get started with active learning, using an algorithm like Bayesian optimization, a few hundred initial data points are enough. In each iteration you have a set of target specs, and the model recommends the top xx candidates, which you turn into prototype racquets. The data from those prototypes is then fed back into the dataset and the model is retrained. It might take several iterations of this sequence to hit the targets, but it’ll certainly be more efficient than the current R&D process. The Artengo software is definitely interesting; I’d use the predictions from that kind of software as an additional input feature for the ML model.
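Here’s roughly what that loop looks like, hand-rolled rather than using a BO library, with a fake build_and_measure() standing in for the real-world step of laying up prototypes and measuring them. The acquisition rule (closeness to target minus an exploration bonus) is just one simple choice among many:

Code:
# Sketch of the sequential design loop under simplified, invented assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(1)
N_PIECES, TARGET_RA = 12, 68.0
ANGLES = [0.0, 30.0, 45.0, 90.0]

def build_and_measure(layups):
    # Stand-in for "make prototypes and measure them" - simulated response here
    return 65.0 + 0.005 * layups.sum(axis=1) + rng.normal(0, 0.5, len(layups))

X = rng.choice(ANGLES, size=(300, N_PIECES))    # a few hundred seed layups
y = build_and_measure(X)

for cycle in range(5):                          # five design/build/test cycles
    gp = GaussianProcessRegressor(Matern(nu=2.5) + WhiteKernel(), normalize_y=True)
    gp.fit(X, y)
    cand = rng.choice(ANGLES, size=(5000, N_PIECES))
    mean, std = gp.predict(cand, return_std=True)
    acq = np.abs(mean - TARGET_RA) - std        # target gap minus exploration bonus
    picks = cand[np.argsort(acq)[:10]]          # top 10 candidates to prototype
    X = np.vstack([X, picks])
    y = np.concatenate([y, build_and_measure(picks)])  # feed results back in
    print(f"cycle {cycle}: best predicted gap {np.abs(mean - TARGET_RA).min():.2f}")

Each pass through the loop is one design/build/test cycle; the exploration bonus is what keeps the process from collapsing onto minor variations of already-known layups.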
 