I am currently using jafit (version 0.1.1) to fit my magnetic hysteresis data (approx. 250 measurement points) with the Venkataraman model. However, the fitting process has been running for over 18 hours without completing, which seems unusually long.
Here are the details of my setup and command:
Input data file: my_BH_curve.csv (contains 250 points of H and B values)
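For reference, here is how I sanity-check the file before feeding it to jafit (the one-line header and the H,B column order are specifics of my file, not a documented jafit schema):

```python
# Quick sanity check of the input; assumes a one-line header and H,B column order.
import numpy as np

data = np.loadtxt("my_BH_curve.csv", delimiter=",", skiprows=1)
H, B = data[:, 0], data[:, 1]
print(f"{len(H)} points, H in [{H.min():.3g}, {H.max():.3g}] A/m, "
      f"B in [{B.min():.3g}, {B.max():.3g}] T")
```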
Yes, this is normal. A full optimization may run for up to a week, but you usually get a passable approximation within a few hours. Do check the intermediate results stored in the current working directory to see how the optimization is progressing.
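If you just want to see whether anything is moving, listing the most recently modified files works regardless of what exactly the outputs are named (no jafit-specific file names are assumed below):

```python
# List the five most recently modified files in the working directory.
import time
from pathlib import Path

files = [p for p in Path.cwd().iterdir() if p.is_file()]
for p in sorted(files, key=lambda p: p.stat().st_mtime, reverse=True)[:5]:
    print(time.ctime(p.stat().st_mtime), p.name)
```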
The sample count doesn’t affect the runtime in any noticeable way: 250 points is fine, and no manual tuning is needed; lowering it is not going to change anything. Most of the time is spent solving the JA equation at each step rather than in the loss function itself; you can also see this in the logs.
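For context, the classic Jiles–Atherton formulation (the Venkataraman model is a variant of it; jafit's internals may differ in detail) couples an anhysteretic curve to an ODE that has to be integrated along the entire field sweep:

$$
M_{\mathrm{an}}(H_e) = M_s\left[\coth\frac{H_e}{a} - \frac{a}{H_e}\right], \qquad H_e = H + \alpha M,
$$

$$
\frac{dM_{\mathrm{irr}}}{dH} = \frac{M_{\mathrm{an}} - M_{\mathrm{irr}}}{k\delta - \alpha\,(M_{\mathrm{an}} - M_{\mathrm{irr}})}, \qquad M = c\,M_{\mathrm{an}} + (1 - c)\,M_{\mathrm{irr}},
$$

where $\delta = \pm 1$ is the sign of $dH/dt$. Every candidate parameter set $(M_s, a, \alpha, k, c)$ requires re-integrating this from scratch, which is why the ODE solver, not the loss bookkeeping, dominates the runtime.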
Best practice is to let the tool do the right thing by default.
Yes! If you have a good initial guess (do you?), you can skip the global optimization and go straight to local refinement by setting `stage=2`. This is mentioned in the README, btw.
This will backfire if your initial guess is not good: the tool will likely get stuck in a poor local minimum (see the toy sketch below).
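To make the failure mode concrete, here is a toy illustration (not jafit's code; the one-parameter loss below is invented for demonstration) of local refinement converging to whichever minimum is nearest the starting point:

```python
# Toy example (NOT jafit): a loss with a global minimum near x=0 and a shallow
# local minimum near x=3; Nelder-Mead simply descends from wherever it starts.
import numpy as np
from scipy.optimize import minimize

def loss(x):
    return 0.1 * x[0] ** 2 - np.cos(2.0 * x[0])

for x0 in (0.5, 2.5):  # a good guess and a bad one
    res = minimize(loss, [x0], method="Nelder-Mead")
    print(f"start {x0} -> x = {res.x[0]:.3f}, loss = {res.fun:.3f}")
```

The bad start lands in the shallow local minimum; this is exactly what `stage=2` risks with a poor initial guess.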
No.
You need a processor with strong single-threaded performance. The other components matter little: at roughly 250 points the dataset is only a few kilobytes and should fit comfortably in the CPU cache.
You need a recent Python.
Prefer GNU/Linux over Windows: empirically I found that the tool runs a little faster there, though I never bothered to investigate why.