What is fine tuning in physics/science?

  • #1
Lynch101
This isn't intended to be a discussion of the philosophical "fine tuning argument", rather, I'm hoping to fact check myself as to what is meant by "fine tuning" in physics.

My understanding was that fine-tuning is the process by which parameters of a model must be adjusted very precisely in order to fit certain observations.

Essentially the idea that the values for certain parameters must be "put in by hand" as opposed to being necessitated/predicted by the mathematical model.

I haven't been able to find a reliable reference to support that idea though. I searched the forums and saw quite a few threads discussing "fine tuning" but from a cursory skim, they seem to take fine tuning as understood.
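To make the "adjusted very precisely" part concrete, here is a purely illustrative toy sketch (the numbers are invented, it is not a physics model): an "observable" is the small leftover of a near-cancellation between two large terms, so the free parameter only reproduces the measurement within an extremely narrow window.

```python
import numpy as np

# Toy illustration (invented numbers, not a physics model): the observable is the
# small remainder of a near-cancellation between two large contributions, so the
# free parameter p must be dialled in very precisely to match the measurement.

def observable(p):
    return 1.0e6 * p - 1.0e6 * 0.9999990  # two large competing terms

measured = 1.0     # pretend experimental value
tolerance = 0.1    # pretend experimental uncertainty

# Scan the parameter and report the window that reproduces the observation.
p_values = np.linspace(0.999998, 1.000000, 2001)
allowed = [p for p in p_values if abs(observable(p) - measured) < tolerance]

print(f"allowed window: {min(allowed):.7f} .. {max(allowed):.7f}")
print(f"width of the window: {max(allowed) - min(allowed):.1e}")
```

The value works, but nothing in the toy model explains why the parameter has to sit in that tiny window, which is the sense of "put in by hand" I have in mind.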
 
  • #2
Lynch101 said:
very precisely
This is the important part. While a fine-tuned theory is not incorrect, it leaves open the question of “why the specific values?”
 
  • #3
Frabjous said:
This is the important part. While a fine-tuned theory is not incorrect, it leaves open the question of “why the specific values?”
Yep, that's the deeper question.

Have you seen a reliable source for that definition of fine tuning, or where it is used in a paper?
 
  • #4
Lynch101 said:
Yep, that's the deeper question.

Have you seen a reliable source for that definition of fine tuning, or where it is used in a paper?
I do not believe that there is an accepted definition of fine tuning; it is more of an interpretative tool. You are going to need someone more theoretical than I am to quantify it. Sorry.
 
  • #5
Frabjous said:
I do not believe that there is an accepted definition of fine tuning; it is more of an interpretative tool. You are going to need someone more theoretical than I am to quantify it. Sorry.
Cheers Frabjous.
 
  • #6
Fine tuning is a lot like curve fitting but on a deeper scale.

One understands curve fitting as mapping a curve to some collected data.

Fine-tuning involves mapping a model to experimental data, recognizing that while the tuned model can be a useful tool for predicting certain values, it does not by itself provide a deeper understanding of why things are as they are.
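A toy sketch of the two side by side (invented data, not from any paper): curve fitting just finds coefficients of an arbitrary curve, while the fine-tuning step keeps an assumed model form and dials its parameter until it matches the measurements.

```python
import numpy as np

# Invented "measurements" drawn from an exponential decay plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
data = 3.0 * np.exp(-0.4 * x) + rng.normal(0.0, 0.05, x.size)

# Curve fitting: a quadratic has no physical meaning here, it simply follows the data.
coeffs = np.polyfit(x, data, deg=2)

# "Fine tuning": keep the assumed model form A * exp(-k * x) and adjust the
# decay constant k by hand (here, a scan) until the misfit to the data is smallest.
k_grid = np.linspace(0.1, 1.0, 91)
misfits = [np.sum((3.0 * np.exp(-k * x) - data) ** 2) for k in k_grid]
k_best = k_grid[int(np.argmin(misfits))]

print("quadratic coefficients:", coeffs)
print("hand-tuned decay constant:", k_best)
```

Both describe the data; neither tells you why the decay constant takes the value it does.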
 
  • #7
jedishrfu said:
Fine tuning is a lot like curve fitting but on a deeper scale.

One understands curve fitting as mapping a curve to some collected data.

Fine-tuning involves mapping a model to experimental data, recognizing that while the tuned model can be a useful tool for predicting certain values, it does not by itself provide a deeper understanding of why things are as they are.
Thanks jedishrfu.

Have you come across anything in a paper, or even an interview, which outlines it like that?

My reason for fact checking myself is that it came up in a discussion and I was trying to find a reference for my outline of FT.
 
  • #8
Sadly, I have no specific reference. At work, the physicists would develop an acoustic model of some seasonal environment and then use real measurement data to calibrate the model.
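Roughly, the workflow looked like the sketch below (a one-parameter stand-in, since the real acoustic model and data are not something I can share): pick a model form, then adjust its parameter until it best matches the field measurements, here using scipy for the minimization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented one-parameter stand-in for the acoustic model: geometric spreading
# plus a season-dependent absorption term that has to be calibrated.
def modeled_loss(distance_m, absorption_db_per_m):
    return 20.0 * np.log10(distance_m) + absorption_db_per_m * distance_m

# Pretend field measurements of propagation loss (dB) at a few ranges.
ranges_m = np.array([100.0, 200.0, 400.0, 800.0])
measured_db = np.array([41.0, 48.5, 56.0, 66.0])

def misfit(absorption_db_per_m):
    # Sum of squared differences between the model and the measurements.
    return float(np.sum((modeled_loss(ranges_m, absorption_db_per_m) - measured_db) ** 2))

result = minimize_scalar(misfit, bounds=(0.0, 0.1), method="bounded")
print(f"calibrated absorption: {result.x:.4f} dB/m")
```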
 
  • #9
jedishrfu said:
Fine-tuning involves mapping a model to experimental data, realizing that while it can be a useful tool for predicting certain values, it lacks a deeper understanding of why things are as they are.
Yes, if one's finely tuned model provides 'better' predictions as verified by experimental or observational data, then the fine-tuning was appropriate. Models give the what, but not necessarily the why.

We often discuss empirical or semi-empirical models as opposed to mechanistic or physics-based models. The former refers to loosely fitting the data - with a curve, i.e., an equation, that results in a reasonable prediction - even better when the prediction remains accurate outside the range of one or more independent variables. The latter refers to a deeper understanding of the underlying physics, which may require more independent variables or finer detail.
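A toy numerical illustration of that point (all numbers invented): a polynomial fitted over a narrow range can describe the data well yet extrapolate badly, while a model with the correct functional form holds up outside the fitted range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented measurements from an underlying exponential decay, sampled over a narrow range.
x_fit = np.linspace(0.0, 2.0, 20)
y_fit = 5.0 * np.exp(-1.2 * x_fit) + rng.normal(0.0, 0.02, x_fit.size)

# Empirical model: cubic polynomial fitted to the narrow range.
poly = np.polyfit(x_fit, y_fit, deg=3)

# "Mechanistic" model: assume the exponential form and fit its rate on a log scale.
slope, intercept = np.polyfit(x_fit, np.log(y_fit), deg=1)

# Compare predictions well outside the fitted range.
x_new = 5.0
print("true value at x=5        :", 5.0 * np.exp(-1.2 * x_new))
print("polynomial extrapolation :", np.polyval(poly, x_new))
print("exponential extrapolation:", np.exp(intercept + slope * x_new))
```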

An example in materials would be modeling at the atomic level to explore why ruthenium as a substitute for iron gives improvements in certain thermo-physical, thermo-mechanical and thermo-chemical properties in different alloys, and what is the functional dependence on the atomic fraction. Part of the effect is the size (atomic radius and atomic mass) of the atoms, and another part is the electron configuration and 'atomic potential'. The electron configuration for Ru, [Kr] 4d⁷ 5s¹, is significantly different than that of Fe, [Ar] 3d⁶ 4s². The nuclear mass, nuclear charge and electron configuration are key factors (and then there is the isotopic factor).

The same effect generally applies to all the elements in periods 4, 5 and 6; and then there are the lanthanides and actinides, and beyond.

Modeling subatomic reactions, planetary weather (atmospheric dynamics/physics) and stars (atmospheric dynamics and stellar evolution), and even galaxies are other challenging examples.
 
  • #10
Astronuc said:
We often discuss empirical or semi-empirical models as opposed to mechanistic or physics-based models.
Sometimes I've seen a model advertised as "derived from first principles."

In engineering we sometimes called a result "screwdrivered" when an input value is varied in order to obtain the desired result. I believe this term comes from electronic equipment, where various resistances, capacitances, etc. can be adjusted (e.g., a potentiometer turned with a screwdriver).
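A software version of the same thing might look like this sketch (invented numbers): vary the one adjustable input by bisection until the output lands on the desired reading.

```python
# Invented stand-in for an instrument response with one adjustable input.
def output(resistance_ohm):
    return 12.0 / (resistance_ohm + 2.0)

target = 1.5          # desired reading
lo, hi = 0.1, 100.0   # adjustment range of the "screwdriver"

# output() decreases with resistance, so bisect until we hit the target.
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if output(mid) > target:
        lo = mid
    else:
        hi = mid

best = 0.5 * (lo + hi)
print(f"dialled-in resistance: {best:.4f} ohm -> output {output(best):.4f}")
```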
 
  • #11
Today's LLMs are examples of models that are tuned, retuned and fine-tuned to produce the best output for a given area. The tuning is done through the application of lots of data. Some of the data is screened beforehand to coax the model in some direction. There are also various processing steps applied as the data works through the model before you see an outcome.

In the end, we have yet to learn how the model knows what it knows, which is why LLMs are always suspect when giving answers.

I worked in a data mining job where data was used to train a model to determine who might leave a bank. This is known as attrition modeling, and it looks at customer attributes such as age, marital status, kids, and investments. From that data, one can extract the customers who left the bank and use them to train/skew the model to identify similar customers. The model would then predict how current bank customers might behave and when they might leave.

The problem came when the bank decided to develop a bank loan model and then had to explain to the government how these people were selected. One solution was to model the model with SQL statements that clearly showed how a given customer was selected for a loan or lower interest rate offer. The SQL ran faster than the model and was used in the actual selection. It wasn't as accurate but it did the job.
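A toy sketch of that "model of the model" idea (the attribute names and thresholds are invented): approximate the opaque scorer with a few explicit threshold rules that could be written directly as a SQL WHERE clause and explained line by line.

```python
# Invented attributes and thresholds, purely to illustrate the idea.

def complex_model_score(customer):
    # Stand-in for the opaque trained model's score.
    score = 0.0
    score += 0.3 if customer["age"] < 30 else 0.0
    score += 0.4 if customer["balance"] < 1000 else 0.0
    score += 0.3 if customer["products"] <= 1 else 0.0
    return score

def rule_based_selection(customer):
    # Transparent approximation of the model; the same logic reads directly as
    # SQL, e.g.  WHERE age < 30 AND balance < 1000
    return customer["age"] < 30 and customer["balance"] < 1000

customers = [
    {"age": 25, "balance": 500, "products": 1},
    {"age": 45, "balance": 20000, "products": 3},
    {"age": 29, "balance": 800, "products": 2},
]

for c in customers:
    print(c, "| model flags:", complex_model_score(c) > 0.5,
          "| rules flag:", rule_based_selection(c))
```

The rules miss some of the model's selections, but every decision can be justified to a regulator.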
 
