JAfit - running too long?

Hello jafit development team,

I am currently using jafit (version 0.1.1) to fit my magnetic hysteresis data (approx. 250 measurement points) with the Venkataraman model. However, the fitting process has been running for over 18 hours without completing, which seems unusually long.

Here are the details of my setup and command:

  • Input data file: my_BH_curve.csv (contains 250 points of H and B values)

  • Command: jafit model=venk ref="my_BH_curve.csv" H_amp_min=10 H_amp_max=400 M_s=342000 a=560 k_p=4194 alpha=0.33596 c_r=0.2

  • The program logs indicate it is using bounds like M_s_min ~ 342110 and that it is performing global optimization with differential evolution.

  • I did not specify num_points, so the default internal resampling might be applied.

  • The dataset and initial guesses are scaled according to the units in my measurements.

  • No errors besides the very long runtime; the program seems stuck or extremely slow.

My questions:

  1. Is it normal for jafit to take this long on ~250 data points, or could this indicate an issue with the optimization setup?
  2. How does num_points affect runtime and fitting quality? Would setting it explicitly to a lower value (e.g., 150) help?
  3. Are there recommended best practices for initial parameter guesses and bounds to ensure reasonable runtimes?
  4. Does jafit support switching to faster local optimizers or simpler models for quicker fits?
  5. Could my data scale or format be causing inefficiencies? Are there preprocessing steps you recommend?
  6. Are there known performance bottlenecks or logs/debug flags I can enable to diagnose slow fitting?
  7. Any suggestions to speed up fitting while maintaining accuracy?

Thank you very much for your assistance! I look forward to your guidance on improving the fitting runtime.

Best regards,

Yes, it is normal. A full optimization may run for up to a week, but you usually get a passable approximation within a few hours. Do check the intermediate results stored in the current working directory to see how the optimization is progressing.

num_points doesn't affect the runtime in any noticeable way. 250 points is fine; no manual tuning is needed, and setting it to a lower sample count is not going to change anything. Most of the time is spent solving the JA equation per field step rather than evaluating the loss function; you can see this in the logs as well.
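
To make that concrete, here is a minimal, self-contained sketch. This is not jafit's implementation; it is just a naive forward-Euler integration of a basic Jiles-Atherton formulation using the parameter values from your command, with arbitrary step counts and a crude guard against negative susceptibility added for illustration. It shows that the cost of one model evaluation grows with the number of field integration steps, while your 250 measured points would only enter a cheap comparison afterwards:

    # Sketch only: NOT jafit's code. Naive Euler integration of a basic
    # Jiles-Atherton formulation, to show that runtime scales with the
    # number of field steps, not with the number of measured points.
    import time
    import numpy as np

    # Parameter values copied from the command line in the question.
    M_s, a, k_p, alpha, c_r = 342000.0, 560.0, 4194.0, 0.33596, 0.2

    def langevin(x: float) -> float:
        """L(x) = coth(x) - 1/x, with the small-argument limit x/3."""
        if abs(x) < 1e-4:
            return x / 3.0
        return 1.0 / np.tanh(x) - 1.0 / x

    def sweep(H_path: np.ndarray) -> np.ndarray:
        """Integrate M(H) along a field path, one Euler step per sample."""
        M = np.zeros(len(H_path))
        M_irr = 0.0
        for i in range(1, len(H_path)):
            dH = H_path[i] - H_path[i - 1]
            delta = 1.0 if dH >= 0.0 else -1.0
            H_e = H_path[i - 1] + alpha * M[i - 1]       # effective field
            M_an = M_s * langevin(H_e / a)               # anhysteretic magnetization
            diff = M_an - M_irr
            denom = delta * k_p - alpha * diff
            chi = diff / denom if denom != 0.0 else 0.0  # dM_irr/dH
            chi = max(chi, 0.0)                          # crude guard: dM/dH >= 0
            M_irr += chi * dH
            M[i] = c_r * M_an + (1.0 - c_r) * M_irr
        return M

    for steps in (2_000, 20_000):
        # One initial-magnetization ramp plus a full loop, like a B(H) sweep.
        H = np.concatenate([
            np.linspace(0.0, 400.0, steps),
            np.linspace(400.0, -400.0, 2 * steps),
            np.linspace(-400.0, 400.0, 2 * steps),
        ])
        t0 = time.perf_counter()
        sweep(H)
        print(f"{len(H):>7} steps: {time.perf_counter() - t0:.3f} s")

If you run it, the second sweep takes roughly ten times longer than the first, even though either resulting curve could be compared against 150 or 250 reference points at negligible extra cost.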

The recommended best practice is to rely on the defaults and let the tool do the right thing on its own.

Yes! If you have a good initial guess (do you?), you can skip the global optimization and go straight to local refinement. You can achieve that by setting stage=2. This is mentioned in the README btw.

This will backfire if your initial guess is not good; the tool will likely converge to a local minimum.
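
For example, taking your original invocation and simply appending the stage selector (all other parameters kept verbatim; they serve as the initial guess for the local refinement):

Command: jafit model=venk ref="my_BH_curve.csv" H_amp_min=10 H_amp_max=400 M_s=342000 a=560 k_p=4194 alpha=0.33596 c_r=0.2 stage=2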

No, the scale and format of your data are not a factor here, and no preprocessing is needed.

  1. You need a computer with a processor that performs well in single-threaded workloads. The performance of other components is not important because the problem dataset is small and should fit comfortably in the cache.
  2. You need a recent Python.
  3. You should not run it on Windows. Empirically I found that the tool runs a little faster on GNU/Linux, but I never bothered to investigate why.

This forum is not a good place for this conversation; if you have anything to add, please open a new issue at https://github.com/Zubax/jafit. Thanks!