feat(pt_expt): add DeepSpin support for pt_expt backend#5370

Open
wanghan-iapcm wants to merge 13 commits into deepmodeling:master from wanghan-iapcm:feat-pt-expt-deepspin

Conversation

@wanghan-iapcm (Collaborator) commented Apr 4, 2026

Summary

  • Add DeepSpin (spin model) support for the pt_expt (.pt2) backend: C++ API, Python inference, LAMMPS pair style, and model deviation
  • Fix an add_mag bug in the dpmodel SpinModel: the atomic virial used add_mag=False while the PT backend used add_mag=True, causing inconsistent per-atom virials between .pth and .pt2
  • Add "type": "spin_ener" to spin model serialization for correct deserialization dispatch across all backends
  • Replace committed .pth model file with canonical .yaml (dpmodel serialization); .pth and .pt2 are now regenerated from .yaml via convert_backend at test time
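The add_mag mismatch can be pictured with a toy sketch; the function below is an illustrative stand-in, not the deepmd implementation:

```python
import numpy as np

# Toy model of per-atom virial assembly: add_mag decides whether the
# magnetic contribution is folded in. dpmodel used False, PT used True,
# so the two backends disagreed on every atom until the flag was aligned.
def atomic_virial(virial, virial_mag, add_mag):
    return virial + virial_mag if add_mag else virial

v = np.ones((2, 9))
vm = np.full((2, 9), 0.5)
# mismatched flags produce different per-atom virials
assert not np.allclose(atomic_virial(v, vm, False), atomic_virial(v, vm, True))
# the fix aligns dpmodel with the PT behavior (add_mag=True)
assert np.allclose(atomic_virial(v, vm, True), 1.5)
```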

Test plan

  • C++ spin tests: 38 passed (12 TF-only skipped)
  • LAMMPS .pt2 spin tests: 7 passed (PBC + NoPBC + model_devi)
  • LAMMPS .pth spin tests: 3 passed (single-model; model_devi needs TF .pb)
  • .pth vs .pt2 agreement: all quantities diff ~1e-15
  • CI: full test suite

Summary by CodeRabbit

  • New Features
    • Spin-capable models: public queries for model spin presence and per-type spin usage; PyTorch-exportable spin backend and new export/run backend for spin models; serialized spin model discriminator added.
    • Output pipeline extended to include magnetic derivatives, magnetic virial counterparts, and per-atom magnetic masks.
  • Tests
    • Added extensive unit and regression tests, new test suites and generators for spin models, updated golden reference data, and model-generation scripts.

Han Wang added 5 commits April 3, 2026 19:23
Add end-to-end spin model support for the pt_expt (.pt2/AOTInductor)
backend, enabling C++ inference of spin energy models.

- Python serialization: handle 7-arg spin models (coord, atype, spin,
  nlist, mapping, fparam, aparam) in _trace_and_export, strip shape
  assertions for NoPBC compatibility
- Python inference: add _eval_model_spin() in DeepEval, extend
  communicate_extended_output for magnetic derivatives (force_mag,
  virial_mag, mask_mag)
- C++ backend: create DeepSpinPTExpt class with spin extension for
  ghost atoms, factory dispatch in DeepSpin.cc
- Shared utilities: extract read_zip_entry, JsonParser,
  buildTypeSortedNlist to commonPTExpt.h
- Tests: C++ gtest (PBC/NoPBC, double/float, standalone/LAMMPS nlist),
  Python pytest (DeepPot API with .pt2/.pte spin models)
- Model generator: gen_spin.py for reproducible test model creation
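The ghost-atom spin extension can be sketched as follows (array shapes and the mapping semantics are assumptions, not taken verbatim from the implementation):

```python
import numpy as np

# mapping[f, i] = local-atom index that extended atom i is a periodic image of
def extend_spin(spin, mapping):
    # gather each extended atom's spin from its owning local atom
    return np.take_along_axis(spin, mapping[..., None].repeat(3, axis=-1), axis=1)

nframes, nloc = 1, 3
spin = np.arange(nframes * nloc * 3, dtype=float).reshape(nframes, nloc, 3)
mapping = np.array([[0, 1, 2, 0, 2]])  # 3 local atoms + 2 ghost images
ext_spin = extend_spin(spin, mapping)
assert ext_spin.shape == (1, 5, 3)
assert np.allclose(ext_spin[0, 3], spin[0, 0])  # ghost 3 mirrors local atom 0
```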
…n tests

Add per-type use_spin query to both C++ and Python inference APIs so that
consumers can determine which atom types have spin enabled. Also add C++
model-deviation tests for .pt2 spin models and LAMMPS-level spin tests
for the .pt2 backend.

- Store use_spin in .pt2 metadata and expose via get_use_spin() in C++
  (DeepSpin, DeepSpinModelDevi, DeepSpinPTExpt, DeepSpinPT stub)
- Add get_use_spin() to Python DeepEvalBackend/DeepEval across all
  backends (pt_expt, pt, dpmodel, pd)
- Create gen_spin_model_devi.py for two-seed spin .pt2 model generation
- Add test_deepspin_model_devi_ptexpt.cc C++ model deviation tests
- Add test_lammps_spin_pt2.py (PBC) and test_lammps_spin_nopbc_pt2.py
- Add use_spin tests for both spin and non-spin models in Python
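A minimal sketch of the per-type use_spin contract added here (the class below is a stand-in for the real DeepEval backends, not their actual code):

```python
# Each backend returns one bool per atom type; an empty list signals that
# no spin metadata is present (the non-spin / .pth fallback behavior).
class FakeDeepEvalBackend:
    def __init__(self, use_spin=None):
        self._use_spin = use_spin or []

    def get_use_spin(self) -> list:
        """Return per-type spin flags; empty when metadata is absent."""
        return list(self._use_spin)

spin_model = FakeDeepEvalBackend(use_spin=[True, False])
plain_model = FakeDeepEvalBackend()
assert spin_model.get_use_spin() == [True, False]
assert plain_model.get_use_spin() == []
```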
…spin

# Conflicts:
#	deepmd/pt_expt/infer/deep_eval.py
pt_expt does not support MPI yet, so remove test_pair_deepmd_mpi
from both test_lammps_spin_pt2.py and test_lammps_spin_nopbc_pt2.py.
Also remove the now-unused imports (importlib, shutil, subprocess,
sys, tempfile).
- Add "type": "spin_ener" to spin model serialize() across all backends
  (dpmodel, pt, pt_expt) so BaseModel.deserialize() can dispatch correctly
- Fix dpmodel spin_model add_mag=False -> add_mag=True for atomic virial
  in both call_common and call_common_lower, matching the pt backend behavior
- Handle spin model deserialization in pt's deserialize_to_file and
  dpmodel's BaseModel.deserialize
- Replace committed .pth with canonical .yaml format; gen_spin.py and
  gen_spin_model_devi.py now regenerate .pth/.pt2 from .yaml via
  convert_backend
- Update all test reference values (C++, C API, LAMMPS) for the
  regenerated models
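The spin_ener dispatch described in this commit can be sketched as follows; the class and method names mimic the deepmd serialization scheme but are illustrative stand-ins, not the actual implementation:

```python
class BaseModel:
    def __init__(self, data: dict):
        self.data = data

    @classmethod
    def deserialize(cls, data: dict):
        # special-case the serialized spin discriminator
        if data.get("type") == "spin_ener":
            return SpinModel.deserialize(data)
        return cls(data)

class SpinModel(BaseModel):
    @classmethod
    def deserialize(cls, data: dict):
        data = dict(data)
        data.pop("type", None)  # defensively strip the optional discriminator
        return cls(data)

m = BaseModel.deserialize({"type": "spin_ener", "backbone_model": {}})
assert isinstance(m, SpinModel)
assert "type" not in m.data
```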
@wanghan-iapcm wanghan-iapcm added the Test CUDA Trigger test CUDA workflow label Apr 4, 2026
@wanghan-iapcm wanghan-iapcm requested review from OutisLi and iProzd April 4, 2026 14:04
@github-actions github-actions bot removed the Test CUDA Trigger test CUDA workflow label Apr 4, 2026
@dosubot dosubot bot added the new feature label Apr 4, 2026
@wanghan-iapcm wanghan-iapcm changed the title feat(pt_expt): add DeepSpin support for .pt2 backend feat(pt_expt): add DeepSpin support for pt_expt backend Apr 4, 2026
Comment on lines +133 to +135
from deepmd.dpmodel.model.spin_model import (
SpinModel,
)
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3c5a29f3dc

@coderabbitai bot commented Apr 4, 2026

Note: Reviews paused

This branch is under active development, so CodeRabbit automatically paused this review to avoid an influx of comments as new commits arrive. This behavior can be configured via the reviews.auto_review.auto_pause_after_reviewed_commits setting. Comment @coderabbitai resume to resume automatic reviews, or @coderabbitai review to trigger a single review.
📝 Walkthrough

Adds spin-aware model serialization/deserialization, new spin query APIs (get_has_spin / get_use_spin / use_spin), magnetic-derivative propagation, a pt_expt (.pt2) AOTInductor spin-capable backend, C++ API/backends for spin, and many test and test-generation updates.

Changes

  • DeepEval APIs (deepmd/dpmodel/infer/deep_eval.py, deepmd/pd/infer/deep_eval.py, deepmd/pt/infer/deep_eval.py, deepmd/infer/deep_eval.py, deepmd/pt_expt/infer/deep_eval.py): added get_use_spin() to backends and a DeepEval.use_spin property; pt_expt exposes get_has_spin/get_use_spin and routes eval to spin/non-spin paths.
  • Model serialization / SpinModel (deepmd/dpmodel/model/base_model.py, deepmd/dpmodel/model/spin_model.py, deepmd/pt/model/model/spin_model.py, deepmd/pt_expt/model/spin_model.py): SpinModel.serialize() includes "type": "spin_ener"; deserializers defensively strip the optional "type"; BaseModel.deserialize special-cases "spin_ener" to call SpinModel.deserialize.
  • Output transform, magnetic derivatives (deepmd/dpmodel/model/transform_output.py): transforms now compute/scatter magnetic derivative tensors (*_derv_r_mag, *_derv_c_mag) and propagate mask_mag into local-atom outputs when present.
  • pt_expt export & serialization (deepmd/pt_expt/utils/serialization.py, deepmd/pt_expt/infer/deep_eval.py): export/import detects type == "spin_ener", deserializes as SpinModel, builds spin-inclusive sample inputs (ext_spin), strips shape-assert nodes from exported graphs, and collects spin metadata (ntypes_spin, use_spin).
  • C++ pt_expt backend & shared utilities (source/api_cc/include/DeepSpinPTExpt.h, source/api_cc/src/DeepSpinPTExpt.cc, source/api_cc/src/commonPTExpt.h, source/api_cc/src/DeepPotPTExpt.cc): added a DeepSpinPTExpt AOTInductor loader with spin-aware init/run/extract; added commonPTExpt.h (JSON parser, ZIP reader, buildTypeSortedNlist) and refactored DeepPotPTExpt to reuse the utilities.
  • C++ public API changes (source/api_cc/include/DeepSpin.h, source/api_cc/include/DeepSpinPT.h, source/api_cc/src/DeepSpin.cc): added get_use_spin() to backend/public interfaces; the .pth backend returns an empty vector; DeepSpin::init supports a PyTorchExportable branch.
  • Tests & test generation, Python/C++/LAMMPS (source/tests/pt_expt/infer/*, source/api_cc/tests/*, source/lmp/tests/*, source/install/test_cc_local.sh, source/tests/infer/*, source/tests/infer/gen_spin*.py, source/tests/infer/gen_common.py): added many spin-focused tests and generators (gen_spin.py, gen_spin_model_devi.py), updated golden references, added YAML model fixtures, and new C++ test harnesses for the pt_expt spin backend.
  • Minor runtime/backends, pt/pt2/pd (deepmd/pt/utils/serialization.py, deepmd/pt/infer/deep_eval.py, deepmd/pd/infer/deep_eval.py): .pth deserialization now recognizes spin_ener and dispatches to SpinModel; backends provide a default get_use_spin() returning an empty list when metadata is absent.
  • Tests, numeric updates (source/api_c/tests/*, source/api_cc/tests/*, source/lmp/tests/*): updated expected numeric golden outputs across multiple test suites to match the new spin-enabled model exports and pt_expt backend behavior.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant DeepEval as DeepEval (pt_expt)
    participant Deserial as Deserializer
    participant SpinModel as SpinModel
    participant ModelDef as ModelOutputDef
    participant EvalFlow as _eval_model_spin

    User->>DeepEval: init(model_path)
    DeepEval->>Deserial: load data["model"]
    alt model.type == "spin_ener"
        Deserial->>SpinModel: SpinModel.deserialize(...)
        SpinModel-->>DeepEval: spin-capable model + spin metadata
        DeepEval->>ModelDef: build spin-aware ModelOutputDef
    else
        Deserial->>DeepEval: BaseModel.deserialize(...)
        DeepEval->>ModelDef: use atomic_output_def()
    end

    User->>DeepEval: eval(coords, spin=..., ...)
    alt spin provided
        DeepEval->>EvalFlow: _eval_model_spin(...)
        EvalFlow->>EvalFlow: extend spin to ghost atoms (mapping)
        EvalFlow->>EvalFlow: call exported model with ext_spin
        EvalFlow->>EvalFlow: communicate_extended_output()
        EvalFlow-->>DeepEval: energy/force/force_mag/virial
    else
        DeepEval->>DeepEval: _eval_model() (standard)
    end
    DeepEval-->>User: outputs
sequenceDiagram
    participant User
    participant DeepSpinPTExpt as DeepSpinPTExpt (C++)
    participant Metadata as ZIP/JSON Reader
    participant NeighborList as buildTypeSortedNlist
    participant TorchModel as AOTIModel
    participant Extractor as OutputExtractor

    User->>DeepSpinPTExpt: init(model_pt2)
    DeepSpinPTExpt->>Metadata: read extra/metadata.json
    Metadata-->>DeepSpinPTExpt: rcut, type_map, use_spin, dims
    User->>DeepSpinPTExpt: compute(coord, spin, atype, ...)
    alt LAMMPS nlist provided
        DeepSpinPTExpt->>NeighborList: build type-sorted nlist
    else standalone
        DeepSpinPTExpt->>NeighborList: extend coords & build nlist
    end
    DeepSpinPTExpt->>TorchModel: run_model(coord_tensor, ext_spin, nlist, ...)
    TorchModel-->>DeepSpinPTExpt: flat output tensors
    DeepSpinPTExpt->>Extractor: extract_outputs(named)
    DeepSpinPTExpt->>DeepSpinPTExpt: fold extended->local, fold magnetic derivatives
    DeepSpinPTExpt-->>User: energy, force, force_mag, virial
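As a rough Python illustration of the type-sorted neighbor-list step shown above (the exact ordering contract of buildTypeSortedNlist is an assumption here, not taken from the C++ source):

```python
import numpy as np

# Neighbors of an atom are regrouped so that all type-0 neighbors come
# first, then type-1, and so on; a stable sort keeps the original order
# within each type group.
atype = np.array([1, 0, 1, 0])   # types of the extended atoms
neighbors = [2, 1, 3]            # raw neighbor indices of some atom
sorted_nb = sorted(neighbors, key=lambda j: atype[j])
assert sorted_nb == [1, 3, 2]    # type-0 atoms (1, 3) before type-1 (2)
```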

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested labels

bug

Suggested reviewers

  • njzjz
  • iProzd
🚥 Pre-merge checks: ✅ 2 passed, ❌ 1 warning

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 38.51%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: the title clearly and specifically describes the main change: adding DeepSpin support to the pt_expt backend.


@coderabbitai bot left a comment

Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
deepmd/dpmodel/model/spin_model.py (1)

645-654: ⚠️ Potential issue | 🟠 Major

Expose magnetic atomic virials in the translated spin API.

These branches now populate *_derv_c_mag, but call(), call_lower(), and translated_output_def() still drop it. The raw path can compute magnetic atomic virials now, but the translated/user-facing API still cannot return them.

💡 Suggested fix
         if (
             self.backbone_model.do_grad_c(var_name)
             and model_ret.get(f"{var_name}_derv_c_redu") is not None
         ):
             model_predict["virial"] = model_ret[f"{var_name}_derv_c_redu"].squeeze(-2)
             if do_atomic_virial and model_ret.get(f"{var_name}_derv_c") is not None:
                 model_predict["atom_virial"] = model_ret[f"{var_name}_derv_c"].squeeze(
                     -2
                 )
+            if do_atomic_virial and model_ret.get(f"{var_name}_derv_c_mag") is not None:
+                model_predict["atom_virial_mag"] = model_ret[
+                    f"{var_name}_derv_c_mag"
+                ].squeeze(-2)
@@
         if (
             self.backbone_model.do_grad_c(var_name)
             and model_ret.get(f"{var_name}_derv_c_redu") is not None
         ):
             model_predict["virial"] = model_ret[f"{var_name}_derv_c_redu"].squeeze(-2)
             if do_atomic_virial and model_ret.get(f"{var_name}_derv_c") is not None:
                 model_predict["extended_virial"] = model_ret[
                     f"{var_name}_derv_c"
                 ].squeeze(-2)
+            if do_atomic_virial and model_ret.get(f"{var_name}_derv_c_mag") is not None:
+                model_predict["extended_virial_mag"] = model_ret[
+                    f"{var_name}_derv_c_mag"
+                ].squeeze(-2)
@@
         if self.backbone_model.do_grad_c(var_name):
             output_def["virial"] = deepcopy(out_def_data[f"{var_name}_derv_c_redu"])
             output_def["virial"].squeeze(-2)
             output_def["atom_virial"] = deepcopy(out_def_data[f"{var_name}_derv_c"])
             output_def["atom_virial"].squeeze(-2)
+            output_def["atom_virial_mag"] = deepcopy(
+                out_def_data[f"{var_name}_derv_c_mag"]
+            )
+            output_def["atom_virial_mag"].squeeze(-2)
         return output_def

Also applies to: 821-831

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/dpmodel/model/spin_model.py` around lines 645 - 654, The translated
spin API currently drops the magnetic atomic virials produced by
process_spin_output (fields named "{var_name}_derv_c_mag"); update call(),
call_lower(), and translated_output_def() to include and propagate these
"{var_name}_derv_c_mag" entries alongside the existing "{var_name}_derv_c" and
"{var_name}_derv_c_mag" mask handling so the user-facing outputs return magnetic
atomic virials; locate where translated_output_def builds the output spec and
where call()/call_lower() construct the returned dict and add the
"{var_name}_derv_c_mag" keys (and any corresponding masks) for the same var_name
cases already handling non-magnetic virials.
🧹 Nitpick comments (5)
source/tests/infer/deeppot_dpa_spin.yaml (1)

2390-2390: Drop the wall-clock timestamp from the canonical fixture.

Committing time makes this YAML change on every regeneration even when the model is identical, which undercuts the goal of a canonical source artifact. Normalizing or stripping it in the generator will keep backend-regeneration diffs focused on real model changes.

💡 Suggested change
- time: "2026-04-04 10:38:37.220790+00:00"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/infer/deeppot_dpa_spin.yaml` at line 2390, The canonical fixture
currently includes a wall-clock "time" field in
source/tests/infer/deeppot_dpa_spin.yaml which causes noisy diffs; update the
fixture generation/serialization step that writes canonical YAML to either omit
the "time" key or normalize it to a fixed placeholder value (e.g., null or a
constant) before writing, and adjust any code that writes/validates fixtures so
generate_canonical_fixture / the YAML serializer drops the "time" field for
canonical outputs; ensure tests that consume the fixture are updated to not
expect a real timestamp.
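An illustrative normalization step (the function name is hypothetical) that the generator could apply before serializing the canonical fixture:

```python
# Drop the volatile wall-clock "time" key so regenerating the YAML fixture
# produces no diff when the model itself is unchanged.
def normalize_fixture(data: dict) -> dict:
    out = dict(data)
    out.pop("time", None)
    return out

fixture = {"model": {"type": "spin_ener"}, "time": "2026-04-04 10:38:37"}
clean = normalize_fixture(fixture)
assert "time" not in clean
assert clean["model"]["type"] == "spin_ener"  # real content is untouched
```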
deepmd/pt_expt/infer/deep_eval.py (1)

128-139: Prefer the explicit spin_ener discriminator here.

deepmd/pt_expt/utils/serialization.py already writes model.type == "spin_ener" to make backend dispatch deterministic. Re-inferring spin from spin/backbone_model creates a second schema to keep in sync.

♻️ Minimal fix
-        if "spin" in model_data and "backbone_model" in model_data:
+        is_spin = model_data.get("type") == "spin_ener" or (
+            "spin" in model_data and "backbone_model" in model_data
+        )
+        if is_spin:
             from deepmd.pt_expt.model.spin_model import (
                 SpinModel,
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt_expt/infer/deep_eval.py` around lines 128 - 139, The code infers
spin models by checking keys in model_data instead of using the explicit
discriminator written during serialization; change the branch in deep_eval.py to
inspect model_dict["model"].get("type") (or similar) for the value "spin_ener"
and use that to decide between SpinModel.deserialize(...) and
BaseModel.deserialize(...), setting self._dpmodel and self._is_spin accordingly
(keep references to model_data, SpinModel.deserialize, BaseModel.deserialize,
self._dpmodel, and self._is_spin to locate the change).
source/api_cc/src/DeepSpinPTExpt.cc (1)

714-719: Minor: Trailing space in type_map string.

The get_type_map function appends a space after each type including the last one, resulting in a trailing space. This is likely harmless but inconsistent with typical formatting.

♻️ Suggested fix
 void DeepSpinPTExpt::get_type_map(std::string& type_map_str) {
-  for (const auto& t : type_map) {
-    type_map_str += t;
-    type_map_str += " ";
+  for (size_t i = 0; i < type_map.size(); ++i) {
+    if (i > 0) {
+      type_map_str += " ";
+    }
+    type_map_str += type_map[i];
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/api_cc/src/DeepSpinPTExpt.cc` around lines 714 - 719, The get_type_map
function (DeepSpinPTExpt::get_type_map) appends a space after every element in
type_map which produces a trailing space; change the logic to only insert a
separator between elements (e.g., loop and append the first element without a
leading space then prepend a space before subsequent elements, or build the
string and remove the final separator) so type_map_str contains types separated
by single spaces with no trailing space.
source/lmp/tests/test_lammps_spin_pt2.py (1)

145-166: Consider simplifying unit validation logic.

The function has three separate if units == "metal" ... else raise ValueError blocks, but units defaults to "metal" and no test calls it with a different value. Consider consolidating to a single validation at the start.

♻️ Suggested simplification
 def _lammps(data_file, units="metal") -> PyLammps:
+    if units != "metal":
+        raise ValueError("units for spin should be metal")
     lammps = PyLammps()
     lammps.units(units)
     lammps.boundary("p p p")
     lammps.atom_style("spin")
-    if units == "metal":
-        lammps.neighbor("2.0 bin")
-    else:
-        raise ValueError("units for spin should be metal")
+    lammps.neighbor("2.0 bin")
     lammps.neigh_modify("every 10 delay 0 check no")
     lammps.read_data(data_file.resolve())
-    if units == "metal":
-        lammps.mass("1 58")
-        lammps.mass("2 16")
-    else:
-        raise ValueError("units for spin should be metal")
-    if units == "metal":
-        lammps.timestep(0.0005)
-    else:
-        raise ValueError("units for spin should be metal")
+    lammps.mass("1 58")
+    lammps.mass("2 16")
+    lammps.timestep(0.0005)
     lammps.fix("1 all nve")
     return lammps
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/lmp/tests/test_lammps_spin_pt2.py` around lines 145 - 166, The _lammps
function repeats the same units check three times; consolidate by validating
units once at the top of _lammps (raise ValueError if units != "metal") and
remove the subsequent redundant if/else blocks, keeping the existing calls to
PyLammps methods (units(), neighbor(), mass(), timestep(), neigh_modify(),
read_data(), fix()) and their current arguments unchanged.
source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc (1)

14-16: Consider documenting the rationale for relaxed EPSILON.

The EPSILON redefinition is significantly more relaxed than typical numerical precision (1e-1 for float). While spin models may need relaxed tolerances, a brief comment explaining why would help future maintainers understand this isn't accidental.

📝 Suggested documentation
-// Spin models need relaxed epsilon
+// Spin models need relaxed epsilon: the magnetic derivative computations
+// involve additional numerical operations that accumulate floating-point
+// differences, especially for single-precision.
 #undef EPSILON
 #define EPSILON (std::is_same<VALUETYPE, double>::value ? 1e-6 : 1e-1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc` around lines 14 - 16,
The EPSILON macro is redefined to be much larger for float (1e-1) and should be
accompanied by a short clarifying comment; update the code around the EPSILON
redefinition (the `#undef` EPSILON / `#define` EPSILON lines referencing VALUETYPE)
to add a brief comment explaining that spin model tests use relaxed tolerances
because iterative solvers/physical degeneracies lead to larger numerical
deviations in float precision, and note that the value was chosen to avoid
spurious failures while preserving test intent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/pd/infer/deep_eval.py`:
- Around line 300-305: The handler for Paddle's has_spin currently leaves
self._has_spin as a callable causing get_use_spin to always return [] for
callable has_spin; update the initialization so that when self._has_spin is
callable you invoke it (i.e. set self._has_spin = self._has_spin()) to mirror
the PyTorch backend, ensuring get_use_spin() reads the boolean value correctly
from the model.spin usage; locate the attribute initialization where
self._has_spin is assigned and replace the assignment when callable to a call of
the function.

In `@deepmd/pt_expt/infer/deep_eval.py`:
- Around line 805-813: In _eval_model_spin(), mirror the pt2 fparam fallback
used in _eval_model: when fparam is None and the model is a .pt2 (or
default_fparam exists in metadata), populate fparam from default_fparam and then
construct fparam_t the same way as in the non-spin path (reshape to (nframes,
self.get_dim_fparam()) and call torch.tensor(..., dtype=torch.float64,
device=DEVICE)); only leave fparam_t as None if neither fparam nor
default_fparam is available. Target symbols: _eval_model_spin, fparam,
default_fparam, fparam_t, self.get_dim_fparam(), DEVICE.
- Around line 326-335: The dispatch currently only checks for kwargs["spin"]
presence; instead validate compatibility between the provided spin and the
loaded model: call the spin-path only if spins is provided AND the spin-capable
evaluator exists (self._eval_model_spin or a boolean like self._model_has_spin),
and call the non-spin path only if spins is not provided AND the non-spin
evaluator exists; if the caller provides spins but the model has no spin
evaluator, raise a clear ValueError mentioning _eval_model_spin, and if the
model requires spins but none were provided, raise a ValueError mentioning
_eval_model; convert spins to np.array only when you will call the spin
evaluator. Ensure error messages reference the relevant symbols
(_eval_model_spin, _eval_model) so the failing case is obvious.

In `@source/api_cc/src/commonPTExpt.h`:
- Around line 55-59: The parse flow allows malformed JSON to slip through;
update parse(), parse_array(), and parse_object() to validate expected tokens
and trailing data: after parse_value() in parse() call skip_ws() and ensure the
next character is EOF (or throw a parse error for trailing bytes); in
parse_array() verify the next non-whitespace character is ']' before consuming
it (and error on mismatch), and likewise in parse_object() verify '}' and ':'
where expected before consuming (error if different). Apply the same
token-verification and trailing-byte checks to the other parsing functions in
the same file/range (lines ~195-243) so malformed inputs like "[1}" or '{"rcut"
6.0}' fail fast with explicit parse errors.
- Around line 345-425: The code returns raw entry bytes without verifying the
ZIP compression method, so compressed entries (method != 0) will be returned
instead of being decompressed; fix by reading the compression method (use
read_u16(pos + 10) from the central-directory entry, or
read_u16(local_header_offset + 8) from the local header as a fallback) and if
the method is not 0 (stored), do not return compressed data—either skip this
entry (continue the loop) or return a clear error/empty result; perform this
check before computing data_offset and calling content.substr(data_offset,
uncompressed_size) and reference the variables/functions compression_method,
read_u16, local_header_offset, data_offset, compressed_size, and
uncompressed_size when making the change.
- Around line 286-303: read_zip_entry currently trusts archive-supplied offsets
and indexes content[] without bounds checks; this can cause OOB reads when
cd_offset, ZIP64 extra-field lengths, or local_header_offset are corrupt. Add a
reusable bounds-check helper (e.g., a lambda like check_range(offset, length)
that validates offset + length <= content.size()) and use it before any use of
cd_offset, before each access inside the extra-field parsing loop (validating
field_data + offset + b reads and the field length), and before reading the
local header (local_header_offset + sizeof(local header) and any subsequent
payload reads). Update read_u16/read_u32 call sites in read_zip_entry to call
the helper first so all array indexing into content is guarded.
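For comparison, Python's stdlib zipfile module shows the compression-method check the review asks for: only a stored (method 0) entry may be returned verbatim, which is the case the C++ reader assumes when it slices the raw bytes out of extra/metadata.json.

```python
import io
import zipfile

# Build a tiny in-memory archive with a stored (uncompressed) entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("extra/metadata.json", '{"rcut": 6.0}')

with zipfile.ZipFile(buf) as zf:
    info = zf.getinfo("extra/metadata.json")
    # method 0 (ZIP_STORED): the entry bytes equal the file bytes, so a
    # raw-slice reader like the C++ one is only valid in this case
    assert info.compress_type == zipfile.ZIP_STORED
    assert zf.read(info.filename) == b'{"rcut": 6.0}'
```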

In `@source/api_cc/src/DeepSpin.cc`:
- Around line 743-748: DeepSpinModelDevi::get_use_spin currently returns the
flags from dps[0] which makes behavior order-dependent; change it to iterate
over dps, pick the first non-empty std::vector<bool> returned by each model's
get_use_spin(), and if you encounter another non-empty vector that disagrees
(element-wise) with the first non-empty one, fail loudly (e.g., throw a
std::runtime_error or use the project's error reporting). If no model returns a
non-empty vector return an empty vector. Use the existing symbols
(DeepSpinModelDevi::get_use_spin and dps[i]->get_use_spin()) to locate and
update the logic.

In `@source/api_cc/src/DeepSpinPTExpt.cc`:
- Around line 663-671: The code is reading per-atom energies from the wrong
output key; when atomic is true, change the tensor source used to build
atom_energy_tensor from output_map["energy"] to output_map["atom_energy"] so you
extract per-atom values rather than the total reduced energy; update the
creation of atom_energy_tensor (the variable atom_energy_tensor used with
cpu_atom_energy_ and atom_energy.assign) to call
output_map["atom_energy"].view({-1}).to(floatType) and leave the remaining
conversion/assign logic unchanged.

In `@source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc`:
- Around line 46-57: The test currently only checks BUILD_PYTORCH before calling
dp.init(model_path) but the .pt2/DeepSpinPTExpt implementation also requires
BUILD_PT_EXPT_SPIN; update both skip guards around the .pt2 model checks to use
a combined condition (if either PyTorch support is missing or pt_expt spin
support is missing) and call GTEST_SKIP with a message like "Skip: PyTorch or
pt_expt spin support is not enabled."; locate the two guarded blocks that
precede dp.init(model_path) / the .pt2 model usage and replace the existing
`#ifndef` BUILD_PYTORCH ... GTEST_SKIP with the suggested `#if`
!defined(BUILD_PYTORCH) || !BUILD_PT_EXPT_SPIN check so the test is skipped
unless both BUILD_PYTORCH and BUILD_PT_EXPT_SPIN are enabled (references:
dp.init, DeepSpinPTExpt).

In `@source/lmp/tests/test_lammps_spin_nopbc_pt2.py`:
- Around line 22-23: Replace the hard-coded module-level Path("data.lmp") and
Path("md.out") with per-test temporary files and ensure teardown_module also
removes the md file; specifically, change uses of the variables data_file and
md_file in this test module (and the setup/teardown logic referenced by
teardown_module) to create unique paths via pytest's tmp_path fixture (or
tempfile.NamedTemporaryFile/TemporaryDirectory) within each test function so
parallel/interrupted runs don't clobber each other, and update teardown_module()
to clean up both data_file and md_file (or rely on tmp_path auto-cleanup)
accordingly.

In `@source/lmp/tests/test_lammps_spin_pt2.py`:
- Around line 141-142: teardown_module currently only removes data_file, leaving
the model deviation output file (md_file) behind; update teardown_module to also
remove md_file if it exists by checking for its existence and calling os.remove
on md_file (handle potential exceptions or use os.path.exists) so both data_file
and md_file are cleaned up; refer to the teardown_module function and the
data_file and md_file variables when making the change.

In `@source/tests/infer/gen_spin_model_devi.py`:
- Around line 226-228: The dp.eval calls unpack unused return values (v, ae, av)
causing Ruff's unused-unpacked-variable error; change the unpacking in the
LAMMPS reference loops so the unused outputs are named with underscores (e.g.,
e, f, _, ae, av, fm, _ -> e, f, _, _, _, fm, _ or explicitly replace v, ae, av
with _), apply the same change to the other dp.eval call around lines 244-246,
and then run ruff check . and ruff format . before committing.

In `@source/tests/infer/gen_spin.py`:
- Around line 189-191: The unpack of dp_pth.eval currently binds v_pth, ae_pth,
and av_pth which are never used; update the unpack to drop those outputs by
replacing them with underscores (or a single underscore-per-unused) so only the
used symbols (e_pth, f_pth, fm_pth, and the trailing throwaway) are assigned;
locate the call to dp_pth.eval in the comparison block (the line assigning
e_pth, f_pth, v_pth, ae_pth, av_pth, fm_pth, _) and remove or rename the unused
variables to underscores to satisfy Ruff.

In `@source/tests/pt_expt/infer/test_deep_eval_spin.py`:
- Line 253: The unpack of dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)
assigns e, f, v, fm, mm but mm is unused (same for the NoPBC unpacks at the
other locations); update those unpacks to either name the unused value with a
leading underscore (e.g., e, f, v, fm, _mm or e, f, v, fm, _) or add an explicit
assertion/unused guard (e.g., assert mm is None) so Ruff RUF059 is not
triggered; locate the calls by the function/method dp.eval and the variables
COORD, BOX, ATYPE, SPIN to apply the change consistently at lines where mm (and
corresponding NoPBC tuple members) are currently ignored.

---

Outside diff comments:
In `@deepmd/dpmodel/model/spin_model.py`:
- Around line 645-654: The translated spin API currently drops the magnetic
atomic virials produced by process_spin_output (fields named
"{var_name}_derv_c_mag"); update call(), call_lower(), and
translated_output_def() to include and propagate these "{var_name}_derv_c_mag"
entries alongside the existing "{var_name}_derv_c" and "mask_mag" handling so
the user-facing outputs return magnetic atomic virials; locate
where translated_output_def builds the output spec and where call()/call_lower()
construct the returned dict and add the "{var_name}_derv_c_mag" keys (and any
corresponding masks) for the same var_name cases already handling non-magnetic
virials.

---

Nitpick comments:
In `@deepmd/pt_expt/infer/deep_eval.py`:
- Around line 128-139: The code infers spin models by checking keys in
model_data instead of using the explicit discriminator written during
serialization; change the branch in deep_eval.py to inspect
model_dict["model"].get("type") (or similar) for the value "spin_ener" and use
that to decide between SpinModel.deserialize(...) and
BaseModel.deserialize(...), setting self._dpmodel and self._is_spin accordingly
(keep references to model_data, SpinModel.deserialize, BaseModel.deserialize,
self._dpmodel, and self._is_spin to locate the change).
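A minimal sketch of the discriminator dispatch the comment asks for, with stub classes standing in for deepmd's `SpinModel`/`BaseModel` (the stubs and `load_model` are illustrative, not the real deep_eval.py code):

```python
# Route on the serialized "type" field instead of probing for
# spin-specific keys in the model data.
class BaseModel:
    @classmethod
    def deserialize(cls, data):
        return ("base", data)


class SpinModel(BaseModel):
    @classmethod
    def deserialize(cls, data):
        return ("spin", data)


def load_model(model_dict):
    model_data = model_dict["model"]
    is_spin = model_data.get("type") == "spin_ener"
    cls = SpinModel if is_spin else BaseModel
    return cls.deserialize(model_data), is_spin


model, is_spin = load_model({"model": {"type": "spin_ener"}})
assert is_spin and model[0] == "spin"
model, is_spin = load_model({"model": {"type": "ener"}})
assert not is_spin and model[0] == "base"
```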

In `@source/api_cc/src/DeepSpinPTExpt.cc`:
- Around line 714-719: The get_type_map function (DeepSpinPTExpt::get_type_map)
appends a space after every element in type_map which produces a trailing space;
change the logic to only insert a separator between elements (e.g., loop and
append the first element without a leading space then prepend a space before
subsequent elements, or build the string and remove the final separator) so
type_map_str contains types separated by single spaces with no trailing space.
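The separator-only-between-elements pattern, shown in Python for brevity (`str.join` never emits a trailing separator); the C++ fix in `DeepSpinPTExpt::get_type_map` would follow the same shape, appending `" "` only before elements after the first:

```python
# Sample type_map taken from the models in this PR; joining instead of
# appending a space after every element avoids the trailing space.
type_map = ["Ni", "O", "H"]
type_map_str = " ".join(type_map)
assert type_map_str == "Ni O H"  # single spaces, no trailing space
assert not type_map_str.endswith(" ")
```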

In `@source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc`:
- Around line 14-16: The EPSILON macro is redefined to be much larger for float
(1e-1) and should be accompanied by a short clarifying comment; update the code
around the EPSILON redefinition (the `#undef` EPSILON / `#define` EPSILON lines
referencing VALUETYPE) to add a brief comment explaining that spin model tests
use relaxed tolerances because iterative solvers/physical degeneracies lead to
larger numerical deviations in float precision, and note that the value was
chosen to avoid spurious failures while preserving test intent.

In `@source/lmp/tests/test_lammps_spin_pt2.py`:
- Around line 145-166: The _lammps function repeats the same units check three
times; consolidate by validating units once at the top of _lammps (raise
ValueError if units != "metal") and remove the subsequent redundant if/else
blocks, keeping the existing calls to PyLammps methods (units(), neighbor(),
mass(), timestep(), neigh_modify(), read_data(), fix()) and their current
arguments unchanged.
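A sketch of the consolidated check; the `_lammps` signature and return value here are simplified stand-ins for the real PyLammps helper in this test module:

```python
# Validate units once at the top and drop the repeated if/else blocks;
# everything after the guard is a single configuration path.
def _lammps(data_file, units="metal"):
    if units != "metal":
        raise ValueError(f"unsupported units: {units}")
    # the real helper would now call units(), neighbor(), mass(),
    # timestep(), neigh_modify(), read_data(), fix() unconditionally
    return {"units": units, "data": data_file}


assert _lammps("data.lmp")["units"] == "metal"
raised = False
try:
    _lammps("data.lmp", units="lj")
except ValueError:
    raised = True
assert raised
```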

In `@source/tests/infer/deeppot_dpa_spin.yaml`:
- Line 2390: The canonical fixture currently includes a wall-clock "time" field
in source/tests/infer/deeppot_dpa_spin.yaml which causes noisy diffs; update the
fixture generation/serialization step that writes canonical YAML to either omit
the "time" key or normalize it to a fixed placeholder value (e.g., null or a
constant) before writing, and adjust any code that writes/validates fixtures so
generate_canonical_fixture / the YAML serializer drops the "time" field for
canonical outputs; ensure tests that consume the fixture are updated to not
expect a real timestamp.
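One way to drop the wall-clock field before serialization; `normalize_canonical` is illustrative, not an existing deepmd helper:

```python
# Strip (or pin) the "time" key so regenerated canonical YAML fixtures
# do not produce noisy diffs; the original dict is left untouched.
def normalize_canonical(model_data: dict) -> dict:
    cleaned = dict(model_data)
    cleaned.pop("time", None)  # or: cleaned["time"] = None for a placeholder
    return cleaned


data = {"model": {"type": "spin_ener"}, "time": "2026-04-04 12:00:00"}
assert "time" not in normalize_canonical(data)
assert "time" in data  # caller's dict is not mutated
```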

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: f096bea5-ecc4-48da-bcbc-0171e426c920

📥 Commits

Reviewing files that changed from the base of the PR and between b02fd91 and 3c5a29f.

📒 Files selected for processing (38)
  • deepmd/dpmodel/infer/deep_eval.py
  • deepmd/dpmodel/model/base_model.py
  • deepmd/dpmodel/model/spin_model.py
  • deepmd/dpmodel/model/transform_output.py
  • deepmd/infer/deep_eval.py
  • deepmd/pd/infer/deep_eval.py
  • deepmd/pt/infer/deep_eval.py
  • deepmd/pt/model/model/spin_model.py
  • deepmd/pt/utils/serialization.py
  • deepmd/pt_expt/infer/deep_eval.py
  • deepmd/pt_expt/model/spin_model.py
  • deepmd/pt_expt/utils/serialization.py
  • source/api_c/tests/test_deepspin_a.cc
  • source/api_c/tests/test_deepspin_a_hpp.cc
  • source/api_cc/include/DeepSpin.h
  • source/api_cc/include/DeepSpinPT.h
  • source/api_cc/include/DeepSpinPTExpt.h
  • source/api_cc/src/DeepPotPTExpt.cc
  • source/api_cc/src/DeepSpin.cc
  • source/api_cc/src/DeepSpinPTExpt.cc
  • source/api_cc/src/commonPTExpt.h
  • source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
  • source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc
  • source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc
  • source/install/test_cc_local.sh
  • source/lmp/tests/test_lammps_spin_nopbc_pt.py
  • source/lmp/tests/test_lammps_spin_nopbc_pt2.py
  • source/lmp/tests/test_lammps_spin_pt.py
  • source/lmp/tests/test_lammps_spin_pt2.py
  • source/tests/infer/deeppot_dpa_spin.pth
  • source/tests/infer/deeppot_dpa_spin.yaml
  • source/tests/infer/deeppot_dpa_spin_md0.yaml
  • source/tests/infer/deeppot_dpa_spin_md1.yaml
  • source/tests/infer/gen_common.py
  • source/tests/infer/gen_spin.py
  • source/tests/infer/gen_spin_model_devi.py
  • source/tests/pt_expt/infer/test_deep_eval.py
  • source/tests/pt_expt/infer/test_deep_eval_spin.py

@codecov

codecov bot commented Apr 4, 2026

Codecov Report

❌ Patch coverage is 67.27853% with 410 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.22%. Comparing base (b02fd91) to head (34d4935).

Files with missing lines Patch % Lines
source/api_cc/src/commonPTExpt.h 51.75% 113 Missing and 38 partials ⚠️
source/api_cc/src/DeepSpinPTExpt.cc 72.45% 85 Missing and 26 partials ⚠️
...ource/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc 68.16% 69 Missing and 2 partials ⚠️
...ce/api_cc/tests/test_deepspin_model_devi_ptexpt.cc 60.39% 38 Missing and 2 partials ⚠️
deepmd/pt_expt/infer/deep_eval.py 88.88% 9 Missing ⚠️
source/api_cc/src/DeepSpin.cc 25.00% 7 Missing and 2 partials ⚠️
deepmd/pd/infer/deep_eval.py 20.00% 4 Missing ⚠️
deepmd/pt/infer/deep_eval.py 20.00% 4 Missing ⚠️
deepmd/dpmodel/infer/deep_eval.py 50.00% 3 Missing ⚠️
deepmd/dpmodel/model/base_model.py 33.33% 2 Missing ⚠️
... and 5 more
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5370      +/-   ##
==========================================
- Coverage   82.37%   82.22%   -0.15%     
==========================================
  Files         812      817       +5     
  Lines       83560    84479     +919     
  Branches     4091     4180      +89     
==========================================
+ Hits        68832    69467     +635     
- Misses      13507    13736     +229     
- Partials     1221     1276      +55     

☔ View full report in Codecov by Sentry.

Update gen_spin.py and gen_spin_model_devi.py to produce models with
type_map=["Ni", "O", "H"] and use_spin=[True, False, False], adding a
dummy unused type to verify correct handling. Regenerate all .yaml
canonical weights and update reference values in C/C++/LAMMPS tests.

Add rel=1e-4 tolerance doc for md.out model devi assertions (LAMMPS
writes ~6 significant digits in default scientific format).

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (4)
source/tests/infer/gen_spin_model_devi.py (1)

226-228: ⚠️ Potential issue | 🟡 Minor

Underscore the unused eval() outputs in both LAMMPS reference loops.

v, ae, and av are never read in either branch, so Ruff will flag both unpacks.

♻️ Minimal fix
-        e, f, v, ae, av, fm, _ = dp.eval(
+        e, f, _v, _ae, _av, fm, _ = dp.eval(
             lmp_coord, lmp_box, lmp_atype, atomic=True, spin=lmp_spin
         )
@@
-        e, f, v, ae, av, fm, _ = dp.eval(
+        e, f, _v, _ae, _av, fm, _ = dp.eval(
             lmp_coord, None, lmp_atype, atomic=True, spin=lmp_spin
         )

As per coding guidelines, **/*.py: Always run ruff check . and ruff format . before committing changes or CI will fail.

Also applies to: 244-246

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/infer/gen_spin_model_devi.py` around lines 226 - 228, The
unpacking from dp.eval in the LAMMPS reference loops currently binds unused
names (v, ae, av) which triggers Ruff; change the unpack patterns in both places
where dp.eval is called (the first call currently assigning e, f, v, ae, av, fm,
_ and the second call at the other branch) to use underscores for unused outputs
(e.g., e, f, _, _, _, fm, _) so only used values are named and Ruff warnings go
away.
source/lmp/tests/test_lammps_spin_nopbc_pt2.py (1)

22-23: ⚠️ Potential issue | 🟠 Major

Use unique temp paths for data.lmp and md.out.

These globals write into the shared source/lmp/tests directory, and teardown_module() only removes data_file. With source/lmp/tests/test_lammps_spin_pt2.py using the same names, parallel runs or interrupted reruns can clobber the input file or leave a stale md.out for the model-deviation assertions to read back. Please move both files under a per-test temp directory and pass those paths through the fixture instead of keeping them at module scope.

Also applies to: 91-92

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/lmp/tests/test_lammps_spin_nopbc_pt2.py` around lines 22 - 23, The
module-level globals data_file and md_file write into the shared tests directory
and can clash between tests; refactor so both are created inside a per-test
temporary directory and their paths are provided via the test fixture instead of
module scope. Concretely, move creation of data_file and md_file into the
fixture used by tests in this file (and the other duplicate occurrences around
the second block), return both Path objects from that fixture, update test
signatures to accept the fixture value rather than using the globals, and ensure
teardown_module() (or the fixture finalizer) removes both files to avoid stale
md.out or data.lmp across parallel or interrupted runs.
source/lmp/tests/test_lammps_spin_pt2.py (1)

23-24: ⚠️ Potential issue | 🟠 Major

Isolate all generated LAMMPS artifacts per test.

data.lmp, md.out, and the virial dump file are fixed names in the shared test directory, while teardown_module() only removes data_file. That now races directly with source/lmp/tests/test_lammps_spin_nopbc_pt2.py and can also leave stale outputs behind for subsequent reruns. Please create these files under a per-test temp directory and thread the paths through the fixture.

Also applies to: 141-142, 201-203

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/lmp/tests/test_lammps_spin_pt2.py` around lines 23 - 24, Tests use
shared filenames (data_file, md_file and the virial dump) in
source/lmp/tests/test_lammps_spin_pt2.py which races with other tests and leaves
stale outputs; change the tests to create a unique per-test temp directory (use
pytest tmp_path or a fixture) and construct data_file, md_file and the virial
dump path under that directory, thread these paths through the relevant
fixture(s) and the tests (functions that reference
data_file/md_file/teardown_module()), and update teardown_module() to remove all
generated artifacts in that temp dir; apply the same change to the other
occurrences noted around the blocks referencing data_file/md_file (the sections
corresponding to lines ~141-142 and ~201-203).
source/tests/infer/gen_spin.py (1)

189-195: ⚠️ Potential issue | 🟡 Minor

Compare the .pth virial outputs here, or stop unpacking them.

This verification block never reads v_pth, ae_pth, or av_pth, so Ruff will still flag the unpack. Since this PR’s backend-alignment change is on atomic virials, I’d prefer comparing those arrays here; otherwise rename them to dummy targets.

♻️ Lint-only fallback
-        e_pth, f_pth, v_pth, ae_pth, av_pth, fm_pth, _ = dp_pth.eval(
+        e_pth, f_pth, _v_pth, _ae_pth, _av_pth, fm_pth, _ = dp_pth.eval(
             coord, box, atype, atomic=True, spin=spin
         )

As per coding guidelines, **/*.py: Always run ruff check . and ruff format . before committing changes or CI will fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/infer/gen_spin.py` around lines 189 - 195, The unpacked outputs
from dp_pth.eval (v_pth, ae_pth, av_pth) are not used and trigger lint errors;
update the verification to actually compare the atomic virials returned by
dp_pth.eval (av_pth) against the test reference (e.g., av1) by adding a print of
the max absolute diff (similar to the existing energy/force checks) so av_pth is
consumed, or if you don't want to compare virials rename v_pth/ae_pth/av_pth to
dummy names (_) when calling dp_pth.eval; locate the call to dp_pth.eval and the
nearby prints (e_pth, f_pth, v_pth, ae_pth, av_pth, fm_pth, _ =
dp_pth.eval(...)) and either add the av_pth vs av1 comparison print or change
v_pth/ae_pth/av_pth to _ to satisfy the linter.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@source/lmp/tests/test_lammps_spin_nopbc_pt2.py`:
- Around line 22-23: The module-level globals data_file and md_file write into
the shared tests directory and can clash between tests; refactor so both are
created inside a per-test temporary directory and their paths are provided via
the test fixture instead of module scope. Concretely, move creation of data_file
and md_file into the fixture used by tests in this file (and the other duplicate
occurrences around the second block), return both Path objects from that
fixture, update test signatures to accept the fixture value rather than using
the globals, and ensure teardown_module() (or the fixture finalizer) removes
both files to avoid stale md.out or data.lmp across parallel or interrupted
runs.

In `@source/lmp/tests/test_lammps_spin_pt2.py`:
- Around line 23-24: Tests use shared filenames (data_file, md_file and the
virial dump) in source/lmp/tests/test_lammps_spin_pt2.py which races with other
tests and leaves stale outputs; change the tests to create a unique per-test
temp directory (use pytest tmp_path or a fixture) and construct data_file,
md_file and the virial dump path under that directory, thread these paths
through the relevant fixture(s) and the tests (functions that reference
data_file/md_file/teardown_module()), and update teardown_module() to remove all
generated artifacts in that temp dir; apply the same change to the other
occurrences noted around the blocks referencing data_file/md_file (the sections
corresponding to lines ~141-142 and ~201-203).

In `@source/tests/infer/gen_spin_model_devi.py`:
- Around line 226-228: The unpacking from dp.eval in the LAMMPS reference loops
currently binds unused names (v, ae, av) which triggers Ruff; change the unpack
patterns in both places where dp.eval is called (the first call currently
assigning e, f, v, ae, av, fm, _ and the second call at the other branch) to use
underscores for unused outputs (e.g., e, f, _, _, _, fm, _) so only used values
are named and Ruff warnings go away.

In `@source/tests/infer/gen_spin.py`:
- Around line 189-195: The unpacked outputs from dp_pth.eval (v_pth, ae_pth,
av_pth) are not used and trigger lint errors; update the verification to
actually compare the atomic virials returned by dp_pth.eval (av_pth) against the
test reference (e.g., av1) by adding a print of the max absolute diff (similar
to the existing energy/force checks) so av_pth is consumed, or if you don't want
to compare virials rename v_pth/ae_pth/av_pth to dummy names (_) when calling
dp_pth.eval; locate the call to dp_pth.eval and the nearby prints (e_pth, f_pth,
v_pth, ae_pth, av_pth, fm_pth, _ = dp_pth.eval(...)) and either add the av_pth
vs av1 comparison print or change v_pth/ae_pth/av_pth to _ to satisfy the
linter.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: a508a26a-f360-44cb-a4f6-af3f3b3eab2a

📥 Commits

Reviewing files that changed from the base of the PR and between 3c5a29f and b7eb496.

📒 Files selected for processing (13)
  • source/api_c/tests/test_deepspin_a.cc
  • source/api_c/tests/test_deepspin_a_hpp.cc
  • source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
  • source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc
  • source/lmp/tests/test_lammps_spin_nopbc_pt.py
  • source/lmp/tests/test_lammps_spin_nopbc_pt2.py
  • source/lmp/tests/test_lammps_spin_pt.py
  • source/lmp/tests/test_lammps_spin_pt2.py
  • source/tests/infer/deeppot_dpa_spin.yaml
  • source/tests/infer/deeppot_dpa_spin_md0.yaml
  • source/tests/infer/deeppot_dpa_spin_md1.yaml
  • source/tests/infer/gen_spin.py
  • source/tests/infer/gen_spin_model_devi.py
✅ Files skipped from review due to trivial changes (3)
  • source/tests/infer/deeppot_dpa_spin.yaml
  • source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc
  • source/tests/infer/deeppot_dpa_spin_md0.yaml
🚧 Files skipped from review as they are similar to previous changes (4)
  • source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
  • source/api_c/tests/test_deepspin_a_hpp.cc
  • source/lmp/tests/test_lammps_spin_nopbc_pt.py
  • source/lmp/tests/test_lammps_spin_pt.py

Han Wang added 2 commits April 5, 2026 00:26
…Expt

Add coordinate shift in the NoPBC code path of DeepSpinPTExpt::compute,
matching the existing logic in DeepPotPTExpt. Without the shift,
negative or out-of-box coordinates could produce incorrect neighbor
lists via copy_coord.
Both _eval_model() and _eval_model_spin() guarded the default_fparam
fallback with self._is_pt2, but .pte models also export fparam as a
required positional input when dim_fparam > 0. Passing None crashes
with "NoneType has no attribute size". Remove the _is_pt2 guard so
the fallback applies to both .pt2 and .pte.

Add TestSpinDefaultFparam test covering both formats.

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
source/tests/pt_expt/infer/test_deep_eval_spin.py (1)

253-253: ⚠️ Potential issue | 🟡 Minor

Rename or assert the unused unpack targets.

These tuples still leave mm, v, av, mm_no, and mm_ex unused, so Ruff will raise RUF059. Prefix them with _ or add assertions if you want to keep covering those outputs.

Suggested cleanup
-        e, f, v, fm, mm = dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)
+        e, f, v, fm, _mm = dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)
@@
-        e, f, v, ae, av, fm, mm = dp.eval(COORD, None, ATYPE, atomic=True, spin=SPIN)
+        e, f, _v, ae, _av, fm, _mm = dp.eval(
+            COORD, None, ATYPE, atomic=True, spin=SPIN
+        )
@@
-        e, f, v, fm, mm = dp.eval(COORD, None, ATYPE, atomic=False, spin=SPIN)
+        e, f, _v, fm, _mm = dp.eval(COORD, None, ATYPE, atomic=False, spin=SPIN)
@@
-        e_no, f_no, v_no, fm_no, mm_no = dp.eval(
+        e_no, f_no, v_no, fm_no, _mm_no = dp.eval(
             COORD, BOX, ATYPE, atomic=False, spin=SPIN
         )
@@
-        e_ex, f_ex, v_ex, fm_ex, mm_ex = dp.eval(
+        e_ex, f_ex, v_ex, fm_ex, _mm_ex = dp.eval(
             COORD, BOX, ATYPE, atomic=False, spin=SPIN, fparam=[0.5]
         )

As per coding guidelines, **/*.py: Always run ruff check . and ruff format . before committing changes or CI will fail.

Also applies to: 280-280, 310-310, 372-376

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/pt_expt/infer/test_deep_eval_spin.py` at line 253, The unpacking
from dp.eval (e.g., the call where you assign e, f, v, fm, mm = dp.eval(COORD,
BOX, ATYPE, atomic=False, spin=SPIN)) leaves several targets unused which trips
RUF059; rename unused variables by prefixing them with an underscore (e.g., _mm,
_v, _fm) or add explicit assertions (assert mm is not None) if you intend to
keep them, and apply the same change to the other dp.eval/unpack sites mentioned
(the similar unpackings around the other occurrences) so unused tuple elements
are either asserted or renamed with a leading underscore.
deepmd/pt_expt/infer/deep_eval.py (1)

326-335: ⚠️ Potential issue | 🟠 Major

Validate spin against the loaded model before dispatch.

Lines 326-335 still route solely on whether kwargs["spin"] is present. A spin model without spin falls into _eval_model(), and a non-spin model with stray spin falls into _eval_model_spin(), so both cases fail later with a low-level runner signature error instead of a clear API error.

Minimal fix
         request_defs = self._get_request_defs(atomic)
         spins = kwargs.get("spin")
+        if self._is_spin and spins is None:
+            raise ValueError("spin input is required for spin models")
+        if not self._is_spin and spins is not None:
+            raise ValueError("spin input is only supported for spin models")
         if spins is not None:
             spins = np.array(spins)
             out = self._eval_func(self._eval_model_spin, numb_test, natoms)(
                 coords, cells, atom_types, spins, fparam, aparam, request_defs
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt_expt/infer/deep_eval.py` around lines 326 - 335, Validate
spin/model compatibility before calling eval: check whether spins =
kwargs.get("spin") is provided and whether the loaded model supports spin by
testing the presence/callability of self._eval_model_spin (and/or
self._eval_model). If spins is not None but self._eval_model_spin is missing or
not callable, raise a clear API error (e.g. ValueError) saying a spin value was
provided for a non-spin model; if spins is None but self._eval_model_spin exists
(model requires spin), raise a clear API error saying spin is required for this
model. After these checks, dispatch to self._eval_model_spin(...) when spins
present and supported, otherwise to self._eval_model(...).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@deepmd/pt_expt/infer/deep_eval.py`:
- Around line 326-335: Validate spin/model compatibility before calling eval:
check whether spins = kwargs.get("spin") is provided and whether the loaded
model supports spin by testing the presence/callability of self._eval_model_spin
(and/or self._eval_model). If spins is not None but self._eval_model_spin is
missing or not callable, raise a clear API error (e.g. ValueError) saying a spin
value was provided for a non-spin model; if spins is None but
self._eval_model_spin exists (model requires spin), raise a clear API error
saying spin is required for this model. After these checks, dispatch to
self._eval_model_spin(...) when spins present and supported, otherwise to
self._eval_model(...).

In `@source/tests/pt_expt/infer/test_deep_eval_spin.py`:
- Line 253: The unpacking from dp.eval (e.g., the call where you assign e, f, v,
fm, mm = dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)) leaves several
targets unused which trips RUF059; rename unused variables by prefixing them
with an underscore (e.g., _mm, _v, _fm) or add explicit assertions (assert mm is
not None) if you intend to keep them, and apply the same change to the other
dp.eval/unpack sites mentioned (the similar unpackings around the other
occurrences) so unused tuple elements are either asserted or renamed with a
leading underscore.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 01459a32-6738-45e6-8cac-0104127babef

📥 Commits

Reviewing files that changed from the base of the PR and between 703d993 and 6e048ef.

📒 Files selected for processing (2)
  • deepmd/pt_expt/infer/deep_eval.py
  • source/tests/pt_expt/infer/test_deep_eval_spin.py

…tests

- Raise ValueError when spin model is called without spin input or
  non-spin model is called with spin input
- Add test_eval_spin_model_requires_spin for both .pt2/.pte
- Add test_nonspin_model_rejects_spin for .pt2
- Add TestSpinAparam: verify aparam takes effect at eager model level
  (torch.export doesn't support dynamic nframes with aparam yet)
- Add test_pte_eval_default_fparam: extend default_fparam coverage to .pte

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
deepmd/pt_expt/infer/deep_eval.py (1)

756-907: Consider extracting shared logic between _eval_model and _eval_model_spin.

The neighbor list building (lines 778-802), fparam/aparam handling (lines 815-847), and result collection (lines 892-907) are nearly identical to _eval_model. While acceptable for this PR, extracting common helpers (e.g., _build_extended_inputs, _prepare_fparam_aparam, _collect_results) would reduce duplication and maintenance burden.

The virial preservation pattern (lines 879-890) is critical for spin models and correctly saves energy_derv_c_redu before communicate_extended_output to preserve virtual atom contributions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt_expt/infer/deep_eval.py` around lines 756 - 907, Duplicate logic
between _eval_model and _eval_model_spin should be extracted: create helper
methods (e.g., _build_extended_inputs to handle neighbor-list branching and
return ext_coord_t/ext_atype_t/nlist_t/mapping_t, _prepare_fparam_aparam to
encapsulate fparam/aparam creation and default-fparam handling, and
_collect_results to translate model output into the requested request_defs) and
replace the repeated blocks in both functions; ensure the virial preservation
pattern around communicate_extended_output (saving/restoring
"energy_derv_c_redu") is preserved exactly when refactoring so spin models keep
virtual-atom virial contributions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@deepmd/pt_expt/infer/deep_eval.py`:
- Around line 756-907: Duplicate logic between _eval_model and _eval_model_spin
should be extracted: create helper methods (e.g., _build_extended_inputs to
handle neighbor-list branching and return
ext_coord_t/ext_atype_t/nlist_t/mapping_t, _prepare_fparam_aparam to encapsulate
fparam/aparam creation and default-fparam handling, and _collect_results to
translate model output into the requested request_defs) and replace the repeated
blocks in both functions; ensure the virial preservation pattern around
communicate_extended_output (saving/restoring "energy_derv_c_redu") is preserved
exactly when refactoring so spin models keep virtual-atom virial contributions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 21531408-29f8-433a-96e3-ca836e9b9ceb

📥 Commits

Reviewing files that changed from the base of the PR and between 6e048ef and 0ff88be.

📒 Files selected for processing (3)
  • deepmd/pt_expt/infer/deep_eval.py
  • source/tests/pt_expt/infer/test_deep_eval_spin.py
  • source/tests/pt_expt/test_dp_freeze.py

Han Wang added 3 commits April 5, 2026 14:06
- Fix torch.export duck-sizing collision: choose sample nframes that
  doesn't match any other tensor dimension, preventing spurious guards
  that lock nframes to a constant
- Add aparam validation in eval: raise ValueError when dim_aparam > 0
  but aparam is not provided (both spin and non-spin paths)
- Upgrade TestSpinAparam to test through .pt2/.pte (was eager-only)
- Add test_eval_without_aparam_raises for both .pt2/.pte
Move default_fparam and aparam tests from test_dp_freeze.py (freeze
entrypoint tests) to test_deep_eval.py (inference tests) where they
belong. Add .pte and .pt2 test classes for both, with consistency
checks against eager model forward and cross-format comparison.
AOTInductor internally creates tensors using PyTorch's default device.
When pt/__init__.py sets torch.set_default_device("cuda:9999999") as a
test sentinel, spin .pt2 compilation fails on CPU-only CI with "Torch
not compiled with CUDA enabled". Clear the default device around
deserialize_to_file calls in spin test fixtures, matching the pattern
already used in non-spin .pt2 tests.
@wanghan-iapcm wanghan-iapcm added the Test CUDA Trigger test CUDA workflow label Apr 5, 2026
@github-actions github-actions bot removed the Test CUDA Trigger test CUDA workflow label Apr 5, 2026
Add md_file cleanup to teardown_module in spin pt2 LAMMPS tests.
Add inline comment documenting PyTorch ZIP STORED assumption.
