feat(pt_expt): add DeepSpin support for pt_expt backend #5370
wanghan-iapcm wants to merge 13 commits into deepmodeling:master from
Conversation
Add end-to-end spin model support for the pt_expt (.pt2/AOTInductor) backend, enabling C++ inference of spin energy models.

- Python serialization: handle 7-arg spin models (coord, atype, spin, nlist, mapping, fparam, aparam) in _trace_and_export; strip shape assertions for NoPBC compatibility
- Python inference: add _eval_model_spin() in DeepEval; extend communicate_extended_output for magnetic derivatives (force_mag, virial_mag, mask_mag)
- C++ backend: create DeepSpinPTExpt class with spin extension for ghost atoms; factory dispatch in DeepSpin.cc
- Shared utilities: extract read_zip_entry, JsonParser, and buildTypeSortedNlist into commonPTExpt.h
- Tests: C++ gtest (PBC/NoPBC, double/float, standalone/LAMMPS nlist); Python pytest (DeepPot API with .pt2/.pte spin models)
- Model generator: gen_spin.py for reproducible test model creation
…n tests

Add a per-type use_spin query to both the C++ and Python inference APIs so that consumers can determine which atom types have spin enabled. Also add C++ model-deviation tests for .pt2 spin models and LAMMPS-level spin tests for the .pt2 backend.

- Store use_spin in .pt2 metadata and expose it via get_use_spin() in C++ (DeepSpin, DeepSpinModelDevi, DeepSpinPTExpt, DeepSpinPT stub)
- Add get_use_spin() to Python DeepEvalBackend/DeepEval across all backends (pt_expt, pt, dpmodel, pd)
- Create gen_spin_model_devi.py for two-seed spin .pt2 model generation
- Add test_deepspin_model_devi_ptexpt.cc C++ model-deviation tests
- Add test_lammps_spin_pt2.py (PBC) and test_lammps_spin_nopbc_pt2.py
- Add use_spin tests for both spin and non-spin models in Python
…spin

# Conflicts:
#	deepmd/pt_expt/infer/deep_eval.py
pt_expt does not support MPI yet, so remove test_pair_deepmd_mpi from both test_lammps_spin_pt2.py and test_lammps_spin_nopbc_pt2.py. Also remove the now-unused imports (importlib, shutil, subprocess, sys, tempfile).
- Add "type": "spin_ener" to spin model serialize() across all backends (dpmodel, pt, pt_expt) so BaseModel.deserialize() can dispatch correctly
- Fix dpmodel spin_model add_mag=False -> add_mag=True for atomic virial in both call_common and call_common_lower, matching the pt backend behavior
- Handle spin model deserialization in pt's deserialize_to_file and dpmodel's BaseModel.deserialize
- Replace the committed .pth with the canonical .yaml format; gen_spin.py and gen_spin_model_devi.py now regenerate .pth/.pt2 from .yaml via convert_backend
- Update all test reference values (C++, C API, LAMMPS) for the regenerated models
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3c5a29f3dc
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds spin-aware model serialization/deserialization, new spin query APIs (

Changes
Sequence Diagram(s)

sequenceDiagram
participant User
participant DeepEval as DeepEval (pt_expt)
participant Deserial as Deserializer
participant SpinModel as SpinModel
participant ModelDef as ModelOutputDef
participant EvalFlow as _eval_model_spin
User->>DeepEval: init(model_path)
DeepEval->>Deserial: load data["model"]
alt model.type == "spin_ener"
Deserial->>SpinModel: SpinModel.deserialize(...)
SpinModel-->>DeepEval: spin-capable model + spin metadata
DeepEval->>ModelDef: build spin-aware ModelOutputDef
else
Deserial->>DeepEval: BaseModel.deserialize(...)
DeepEval->>ModelDef: use atomic_output_def()
end
User->>DeepEval: eval(coords, spin=..., ...)
alt spin provided
DeepEval->>EvalFlow: _eval_model_spin(...)
EvalFlow->>EvalFlow: extend spin to ghost atoms (mapping)
EvalFlow->>EvalFlow: call exported model with ext_spin
EvalFlow->>EvalFlow: communicate_extended_output()
EvalFlow-->>DeepEval: energy/force/force_mag/virial
else
DeepEval->>DeepEval: _eval_model() (standard)
end
DeepEval-->>User: outputs
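The "extend spin to ghost atoms (mapping)" step in the diagram above can be sketched in NumPy. The helper name and the array shapes below are illustrative assumptions, not the actual deepmd-kit implementation:

```python
import numpy as np


def extend_spin_to_ghosts(spin: np.ndarray, mapping: np.ndarray) -> np.ndarray:
    """Copy each local atom's spin onto its ghost images.

    spin:    (nframes, nloc, 3)  per-atom magnetic moments
    mapping: (nframes, nall)     index of the owning local atom for every
                                 extended (local + ghost) atom
    returns: (nframes, nall, 3)  spins for the extended system
    """
    # gather along the atom axis; the trailing axis (xyz) broadcasts
    return np.take_along_axis(spin, mapping[..., None], axis=1)


# one frame, two local atoms, one ghost image of atom 0
spin = np.array([[[0.0, 0.0, 1.2], [0.0, 0.0, -1.2]]])
mapping = np.array([[0, 1, 0]])
ext_spin = extend_spin_to_ghosts(spin, mapping)
print(ext_spin.shape)  # (1, 3, 3)
```

The ghost atom (index 2) ends up carrying the same moment as its owner (index 0), which is what lets the exported model see consistent spins across periodic images.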
sequenceDiagram
participant User
participant DeepSpinPTExpt as DeepSpinPTExpt (C++)
participant Metadata as ZIP/JSON Reader
participant NeighborList as buildTypeSortedNlist
participant TorchModel as AOTIModel
participant Extractor as OutputExtractor
User->>DeepSpinPTExpt: init(model_pt2)
DeepSpinPTExpt->>Metadata: read extra/metadata.json
Metadata-->>DeepSpinPTExpt: rcut, type_map, use_spin, dims
User->>DeepSpinPTExpt: compute(coord, spin, atype, ...)
alt LAMMPS nlist provided
DeepSpinPTExpt->>NeighborList: build type-sorted nlist
else standalone
DeepSpinPTExpt->>NeighborList: extend coords & build nlist
end
DeepSpinPTExpt->>TorchModel: run_model(coord_tensor, ext_spin, nlist, ...)
TorchModel-->>DeepSpinPTExpt: flat output tensors
DeepSpinPTExpt->>Extractor: extract_outputs(named)
DeepSpinPTExpt->>DeepSpinPTExpt: fold extended->local, fold magnetic derivatives
DeepSpinPTExpt-->>User: energy, force, force_mag, virial
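The type-sorted neighbor-list step above (the buildTypeSortedNlist box) can be approximated in Python. The function name, the -1 padding convention, and the fixed per-type width are assumptions for illustration only:

```python
import numpy as np


def type_sorted_nlist(nlist_row, atype, ntypes, max_per_type):
    """Re-bucket one atom's neighbors by neighbor type, padded with -1,
    so the model sees a fixed-width, type-contiguous neighbor list."""
    out = np.full(ntypes * max_per_type, -1, dtype=np.int64)
    for t in range(ntypes):
        # keep only valid neighbors of type t, truncated to the bucket width
        neigh = [j for j in nlist_row if j >= 0 and atype[j] == t][:max_per_type]
        out[t * max_per_type : t * max_per_type + len(neigh)] = neigh
    return out


atype = np.array([0, 1, 0, 1])  # types of the extended atoms
row = type_sorted_nlist([1, 2, 3], atype, ntypes=2, max_per_type=2)
print(row.tolist())  # [2, -1, 1, 3]
```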
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
deepmd/dpmodel/model/spin_model.py (1)
645-654: ⚠️ Potential issue | 🟠 Major — Expose magnetic atomic virials in the translated spin API.

These branches now populate *_derv_c_mag, but call(), call_lower(), and translated_output_def() still drop it. The raw path can compute magnetic atomic virials now, but the translated/user-facing API still cannot return them.

💡 Suggested fix
 if (
     self.backbone_model.do_grad_c(var_name)
     and model_ret.get(f"{var_name}_derv_c_redu") is not None
 ):
     model_predict["virial"] = model_ret[f"{var_name}_derv_c_redu"].squeeze(-2)
     if do_atomic_virial and model_ret.get(f"{var_name}_derv_c") is not None:
         model_predict["atom_virial"] = model_ret[f"{var_name}_derv_c"].squeeze(
             -2
         )
+    if do_atomic_virial and model_ret.get(f"{var_name}_derv_c_mag") is not None:
+        model_predict["atom_virial_mag"] = model_ret[
+            f"{var_name}_derv_c_mag"
+        ].squeeze(-2)
@@
 if (
     self.backbone_model.do_grad_c(var_name)
     and model_ret.get(f"{var_name}_derv_c_redu") is not None
 ):
     model_predict["virial"] = model_ret[f"{var_name}_derv_c_redu"].squeeze(-2)
     if do_atomic_virial and model_ret.get(f"{var_name}_derv_c") is not None:
         model_predict["extended_virial"] = model_ret[
             f"{var_name}_derv_c"
         ].squeeze(-2)
+    if do_atomic_virial and model_ret.get(f"{var_name}_derv_c_mag") is not None:
+        model_predict["extended_virial_mag"] = model_ret[
+            f"{var_name}_derv_c_mag"
+        ].squeeze(-2)
@@
 if self.backbone_model.do_grad_c(var_name):
     output_def["virial"] = deepcopy(out_def_data[f"{var_name}_derv_c_redu"])
     output_def["virial"].squeeze(-2)
     output_def["atom_virial"] = deepcopy(out_def_data[f"{var_name}_derv_c"])
     output_def["atom_virial"].squeeze(-2)
+    output_def["atom_virial_mag"] = deepcopy(
+        out_def_data[f"{var_name}_derv_c_mag"]
+    )
+    output_def["atom_virial_mag"].squeeze(-2)
 return output_def

Also applies to: 821-831
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/model/spin_model.py` around lines 645 - 654, The translated spin API currently drops the magnetic atomic virials produced by process_spin_output (fields named "{var_name}_derv_c_mag"); update call(), call_lower(), and translated_output_def() to include and propagate these "{var_name}_derv_c_mag" entries alongside the existing "{var_name}_derv_c" and "{var_name}_derv_c_mag" mask handling so the user-facing outputs return magnetic atomic virials; locate where translated_output_def builds the output spec and where call()/call_lower() construct the returned dict and add the "{var_name}_derv_c_mag" keys (and any corresponding masks) for the same var_name cases already handling non-magnetic virials.
🧹 Nitpick comments (5)
source/tests/infer/deeppot_dpa_spin.yaml (1)
2390-2390: Drop the wall-clock timestamp from the canonical fixture.

Committing time makes this YAML change on every regeneration even when the model is identical, which undercuts the goal of a canonical source artifact. Normalizing or stripping it in the generator will keep backend-regeneration diffs focused on real model changes.

💡 Suggested change
- time: "2026-04-04 10:38:37.220790+00:00"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/tests/infer/deeppot_dpa_spin.yaml` at line 2390, The canonical fixture currently includes a wall-clock "time" field in source/tests/infer/deeppot_dpa_spin.yaml which causes noisy diffs; update the fixture generation/serialization step that writes canonical YAML to either omit the "time" key or normalize it to a fixed placeholder value (e.g., null or a constant) before writing, and adjust any code that writes/validates fixtures so generate_canonical_fixture / the YAML serializer drops the "time" field for canonical outputs; ensure tests that consume the fixture are updated to not expect a real timestamp.

deepmd/pt_expt/infer/deep_eval.py (1)
128-139: Prefer the explicit spin_ener discriminator here.

deepmd/pt_expt/utils/serialization.py already writes model.type == "spin_ener" to make backend dispatch deterministic. Re-inferring spin from spin/backbone_model creates a second schema to keep in sync.

♻️ Minimal fix
-        if "spin" in model_data and "backbone_model" in model_data:
+        is_spin = model_data.get("type") == "spin_ener" or (
+            "spin" in model_data and "backbone_model" in model_data
+        )
+        if is_spin:
             from deepmd.pt_expt.model.spin_model import (
                 SpinModel,
             )
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt_expt/infer/deep_eval.py` around lines 128 - 139, The code infers spin models by checking keys in model_data instead of using the explicit discriminator written during serialization; change the branch in deep_eval.py to inspect model_dict["model"].get("type") (or similar) for the value "spin_ener" and use that to decide between SpinModel.deserialize(...) and BaseModel.deserialize(...), setting self._dpmodel and self._is_spin accordingly (keep references to model_data, SpinModel.deserialize, BaseModel.deserialize, self._dpmodel, and self._is_spin to locate the change).

source/api_cc/src/DeepSpinPTExpt.cc (1)
714-719: Minor: Trailing space in type_map string.

The get_type_map function appends a space after each type, including the last one, resulting in a trailing space. This is likely harmless but inconsistent with typical formatting.

♻️ Suggested fix
 void DeepSpinPTExpt::get_type_map(std::string& type_map_str) {
-  for (const auto& t : type_map) {
-    type_map_str += t;
-    type_map_str += " ";
+  for (size_t i = 0; i < type_map.size(); ++i) {
+    if (i > 0) {
+      type_map_str += " ";
+    }
+    type_map_str += type_map[i];
   }
 }
Verify each finding against the current code and only fix it if needed. In `@source/api_cc/src/DeepSpinPTExpt.cc` around lines 714 - 719, The get_type_map function (DeepSpinPTExpt::get_type_map) appends a space after every element in type_map which produces a trailing space; change the logic to only insert a separator between elements (e.g., loop and append the first element without a leading space then prepend a space before subsequent elements, or build the string and remove the final separator) so type_map_str contains types separated by single spaces with no trailing space.

source/lmp/tests/test_lammps_spin_pt2.py (1)
145-166: Consider simplifying unit validation logic.

The function has three separate if units == "metal" ... else raise ValueError blocks, but units defaults to "metal" and no test calls it with a different value. Consider consolidating to a single validation at the start.

♻️ Suggested simplification
 def _lammps(data_file, units="metal") -> PyLammps:
+    if units != "metal":
+        raise ValueError("units for spin should be metal")
     lammps = PyLammps()
     lammps.units(units)
     lammps.boundary("p p p")
     lammps.atom_style("spin")
-    if units == "metal":
-        lammps.neighbor("2.0 bin")
-    else:
-        raise ValueError("units for spin should be metal")
+    lammps.neighbor("2.0 bin")
     lammps.neigh_modify("every 10 delay 0 check no")
     lammps.read_data(data_file.resolve())
-    if units == "metal":
-        lammps.mass("1 58")
-        lammps.mass("2 16")
-    else:
-        raise ValueError("units for spin should be metal")
-    if units == "metal":
-        lammps.timestep(0.0005)
-    else:
-        raise ValueError("units for spin should be metal")
+    lammps.mass("1 58")
+    lammps.mass("2 16")
+    lammps.timestep(0.0005)
     lammps.fix("1 all nve")
     return lammps
Verify each finding against the current code and only fix it if needed. In `@source/lmp/tests/test_lammps_spin_pt2.py` around lines 145 - 166, The _lammps function repeats the same units check three times; consolidate by validating units once at the top of _lammps (raise ValueError if units != "metal") and remove the subsequent redundant if/else blocks, keeping the existing calls to PyLammps methods (units(), neighbor(), mass(), timestep(), neigh_modify(), read_data(), fix()) and their current arguments unchanged.

source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc (1)
14-16: Consider documenting the rationale for relaxed EPSILON.

The EPSILON redefinition is significantly more relaxed than typical numerical precision (1e-1 for float). While spin models may need relaxed tolerances, a brief comment explaining why would help future maintainers understand this isn't accidental.
📝 Suggested documentation
-// Spin models need relaxed epsilon
+// Spin models need relaxed epsilon: the magnetic derivative computations
+// involve additional numerical operations that accumulate floating-point
+// differences, especially for single-precision.
 #undef EPSILON
 #define EPSILON (std::is_same<VALUETYPE, double>::value ? 1e-6 : 1e-1)
Verify each finding against the current code and only fix it if needed. In `@source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc` around lines 14 - 16, The EPSILON macro is redefined to be much larger for float (1e-1) and should be accompanied by a short clarifying comment; update the code around the EPSILON redefinition (the `#undef` EPSILON / `#define` EPSILON lines referencing VALUETYPE) to add a brief comment explaining that spin model tests use relaxed tolerances because iterative solvers/physical degeneracies lead to larger numerical deviations in float precision, and note that the value was chosen to avoid spurious failures while preserving test intent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deepmd/pd/infer/deep_eval.py`:
- Around line 300-305: The handler for Paddle's has_spin currently leaves
self._has_spin as a callable causing get_use_spin to always return [] for
callable has_spin; update the initialization so that when self._has_spin is
callable you invoke it (i.e. set self._has_spin = self._has_spin()) to mirror
the PyTorch backend, ensuring get_use_spin() reads the boolean value correctly
from the model.spin usage; locate the attribute initialization where
self._has_spin is assigned and replace the assignment when callable to a call of
the function.
In `@deepmd/pt_expt/infer/deep_eval.py`:
- Around line 805-813: In _eval_model_spin(), mirror the pt2 fparam fallback
used in _eval_model: when fparam is None and the model is a .pt2 (or
default_fparam exists in metadata), populate fparam from default_fparam and then
construct fparam_t the same way as in the non-spin path (reshape to (nframes,
self.get_dim_fparam()) and call torch.tensor(..., dtype=torch.float64,
device=DEVICE)); only leave fparam_t as None if neither fparam nor
default_fparam is available. Target symbols: _eval_model_spin, fparam,
default_fparam, fparam_t, self.get_dim_fparam(), DEVICE.
- Around line 326-335: The dispatch currently only checks for kwargs["spin"]
presence; instead validate compatibility between the provided spin and the
loaded model: call the spin-path only if spins is provided AND the spin-capable
evaluator exists (self._eval_model_spin or a boolean like self._model_has_spin),
and call the non-spin path only if spins is not provided AND the non-spin
evaluator exists; if the caller provides spins but the model has no spin
evaluator, raise a clear ValueError mentioning _eval_model_spin, and if the
model requires spins but none were provided, raise a ValueError mentioning
_eval_model; convert spins to np.array only when you will call the spin
evaluator. Ensure error messages reference the relevant symbols
(_eval_model_spin, _eval_model) so the failing case is obvious.
In `@source/api_cc/src/commonPTExpt.h`:
- Around line 55-59: The parse flow allows malformed JSON to slip through;
update parse(), parse_array(), and parse_object() to validate expected tokens
and trailing data: after parse_value() in parse() call skip_ws() and ensure the
next character is EOF (or throw a parse error for trailing bytes); in
parse_array() verify the next non-whitespace character is ']' before consuming
it (and error on mismatch), and likewise in parse_object() verify '}' and ':'
where expected before consuming (error if different). Apply the same
token-verification and trailing-byte checks to the other parsing functions in
the same file/range (lines ~195-243) so malformed inputs like "[1}" or '{"rcut"
6.0}' fail fast with explicit parse errors.
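For comparison, the trailing-data check requested above looks like this in Python, using json.JSONDecoder.raw_decode. This is an illustrative sketch of the validation idea, not the C++ JsonParser itself:

```python
import json


def strict_parse(text: str):
    """Parse one JSON value and reject any non-whitespace trailing bytes,
    mirroring the fail-fast behavior the review asks for."""
    obj, end = json.JSONDecoder().raw_decode(text)
    if text[end:].strip():
        raise ValueError(f"trailing data after JSON value at offset {end}")
    return obj


print(strict_parse('{"rcut": 6.0}'))  # {'rcut': 6.0}
try:
    strict_parse('{"rcut": 6.0} garbage')
except ValueError as e:
    print("rejected:", e)
```

Malformed token sequences such as `[1}` are rejected by the decoder itself, so only the trailing-byte check needs to be added on top.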
- Around line 345-425: The code returns raw entry bytes without verifying the
ZIP compression method, so compressed entries (method != 0) will be returned
instead of being decompressed; fix by reading the compression method (use
read_u16(pos + 10) from the central-directory entry, or
read_u16(local_header_offset + 8) from the local header as a fallback) and if
the method is not 0 (stored), do not return compressed data—either skip this
entry (continue the loop) or return a clear error/empty result; perform this
check before computing data_offset and calling content.substr(data_offset,
uncompressed_size) and reference the variables/functions compression_method,
read_u16, local_header_offset, data_offset, compressed_size, and
uncompressed_size when making the change.
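The stored-only invariant can be demonstrated with Python's zipfile module. This sketch, including the read_stored_entry name and the extra/metadata.json entry, is illustrative and is not the hand-rolled C++ reader:

```python
import io
import zipfile


def read_stored_entry(zip_bytes: bytes, name: str) -> bytes:
    """Return an entry's bytes only if it is stored (compression method 0),
    the invariant the review wants the C++ ZIP reader to enforce."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        info = zf.getinfo(name)
        if info.compress_type != zipfile.ZIP_STORED:
            raise ValueError(f"{name} is compressed (method {info.compress_type})")
        return zf.read(name)


# build a tiny archive with one stored entry, as .pt2 metadata typically is
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(
        zipfile.ZipInfo("extra/metadata.json"),
        b'{"rcut": 6.0}',
        compress_type=zipfile.ZIP_STORED,
    )
print(read_stored_entry(buf.getvalue(), "extra/metadata.json"))  # b'{"rcut": 6.0}'
```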
- Around line 286-303: read_zip_entry currently trusts archive-supplied offsets
and indexes content[] without bounds checks; this can cause OOB reads when
cd_offset, ZIP64 extra-field lengths, or local_header_offset are corrupt. Add a
reusable bounds-check helper (e.g., a lambda like check_range(offset, length)
that validates offset + length <= content.size()) and use it before any use of
cd_offset, before each access inside the extra-field parsing loop (validating
field_data + offset + b reads and the field length), and before reading the
local header (local_header_offset + sizeof(local header) and any subsequent
payload reads). Update read_u16/read_u32 call sites in read_zip_entry to call
the helper first so all array indexing into content is guarded.
In `@source/api_cc/src/DeepSpin.cc`:
- Around line 743-748: DeepSpinModelDevi::get_use_spin currently returns the
flags from dps[0] which makes behavior order-dependent; change it to iterate
over dps, pick the first non-empty std::vector<bool> returned by each model's
get_use_spin(), and if you encounter another non-empty vector that disagrees
(element-wise) with the first non-empty one, fail loudly (e.g., throw a
std::runtime_error or use the project's error reporting). If no model returns a
non-empty vector return an empty vector. Use the existing symbols
(DeepSpinModelDevi::get_use_spin and dps[i]->get_use_spin()) to locate and
update the logic.
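The consensus logic described above can be sketched in Python; the function name is hypothetical, and the real fix belongs in the C++ DeepSpinModelDevi::get_use_spin:

```python
def consensus_use_spin(per_model_flags):
    """Return the first non-empty use_spin vector across models, failing
    loudly if any other non-empty vector disagrees; empty if none is set."""
    first = None
    for flags in per_model_flags:
        if not flags:
            continue  # this model reported nothing; skip it
        if first is None:
            first = flags
        elif flags != first:
            raise RuntimeError(
                f"inconsistent use_spin across models: {first} vs {flags}"
            )
    return first or []


print(consensus_use_spin([[], [True, False], [True, False]]))  # [True, False]
```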
In `@source/api_cc/src/DeepSpinPTExpt.cc`:
- Around line 663-671: The code is reading per-atom energies from the wrong
output key; when atomic is true, change the tensor source used to build
atom_energy_tensor from output_map["energy"] to output_map["atom_energy"] so you
extract per-atom values rather than the total reduced energy; update the
creation of atom_energy_tensor (the variable atom_energy_tensor used with
cpu_atom_energy_ and atom_energy.assign) to call
output_map["atom_energy"].view({-1}).to(floatType) and leave the remaining
conversion/assign logic unchanged.
In `@source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc`:
- Around line 46-57: The test currently only checks BUILD_PYTORCH before calling
dp.init(model_path) but the .pt2/DeepSpinPTExpt implementation also requires
BUILD_PT_EXPT_SPIN; update both skip guards around the .pt2 model checks to use
a combined condition (if either PyTorch support is missing or pt_expt spin
support is missing) and call GTEST_SKIP with a message like "Skip: PyTorch or
pt_expt spin support is not enabled."; locate the two guarded blocks that
precede dp.init(model_path) / the .pt2 model usage and replace the existing
`#ifndef` BUILD_PYTORCH ... GTEST_SKIP with the suggested `#if`
!defined(BUILD_PYTORCH) || !BUILD_PT_EXPT_SPIN check so the test is skipped
unless both BUILD_PYTORCH and BUILD_PT_EXPT_SPIN are enabled (references:
dp.init, DeepSpinPTExpt).
In `@source/lmp/tests/test_lammps_spin_nopbc_pt2.py`:
- Around line 22-23: Replace the hard-coded module-level Path("data.lmp") and
Path("md.out") with per-test temporary files and ensure teardown_module also
removes the md file; specifically, change uses of the variables data_file and
md_file in this test module (and the setup/teardown logic referenced by
teardown_module) to create unique paths via pytest's tmp_path fixture (or
tempfile.NamedTemporaryFile/TemporaryDirectory) within each test function so
parallel/interrupted runs don't clobber each other, and update teardown_module()
to clean up both data_file and md_file (or rely on tmp_path auto-cleanup)
accordingly.
In `@source/lmp/tests/test_lammps_spin_pt2.py`:
- Around line 141-142: teardown_module currently only removes data_file, leaving
the model deviation output file (md_file) behind; update teardown_module to also
remove md_file if it exists by checking for its existence and calling os.remove
on md_file (handle potential exceptions or use os.path.exists) so both data_file
and md_file are cleaned up; refer to the teardown_module function and the
data_file and md_file variables when making the change.
In `@source/tests/infer/gen_spin_model_devi.py`:
- Around line 226-228: The dp.eval calls unpack unused return values (v, ae, av)
causing Ruff's unused-unpacked-variable error; change the unpacking in the
LAMMPS reference loops so the unused outputs are named with underscores (e.g.,
e, f, _, ae, av, fm, _ -> e, f, _, _, _, fm, _ or explicitly replace v, ae, av
with _), apply the same change to the other dp.eval call around lines 244-246,
and then run ruff check . and ruff format . before committing.
In `@source/tests/infer/gen_spin.py`:
- Around line 189-191: The unpack of dp_pth.eval currently binds v_pth, ae_pth,
and av_pth which are never used; update the unpack to drop those outputs by
replacing them with underscores (or a single underscore-per-unused) so only the
used symbols (e_pth, f_pth, fm_pth, and the trailing throwaway) are assigned;
locate the call to dp_pth.eval in the comparison block (the line assigning
e_pth, f_pth, v_pth, ae_pth, av_pth, fm_pth, _) and remove or rename the unused
variables to underscores to satisfy Ruff.
In `@source/tests/pt_expt/infer/test_deep_eval_spin.py`:
- Line 253: The unpack of dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)
assigns e, f, v, fm, mm but mm is unused (same for the NoPBC unpacks at the
other locations); update those unpacks to either name the unused value with a
leading underscore (e.g., e, f, v, fm, _mm or e, f, v, fm, _) or add an explicit
assertion/unused guard (e.g., assert mm is None) so Ruff RUF059 is not
triggered; locate the calls by the function/method dp.eval and the variables
COORD, BOX, ATYPE, SPIN to apply the change consistently at lines where mm (and
corresponding NoPBC tuple members) are currently ignored.
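The underscore-unpacking fixes suggested for gen_spin.py, gen_spin_model_devi.py, and the RUF059 finding all follow the same pattern; a minimal sketch with a stand-in for dp.eval:

```python
def fake_eval():
    # stand-in for dp.eval(...), which returns (e, f, v, ae, av, fm, mask)
    return 1.0, "f", "v", "ae", "av", "fm", None


# values we don't consume get underscore names so Ruff's
# unused-unpacked-variable rules stay quiet
e, f, _v, _ae, _av, fm, _ = fake_eval()
print(e, f, fm)  # 1.0 f fm
```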
---
Outside diff comments:
In `@deepmd/dpmodel/model/spin_model.py`:
- Around line 645-654: The translated spin API currently drops the magnetic
atomic virials produced by process_spin_output (fields named
"{var_name}_derv_c_mag"); update call(), call_lower(), and
translated_output_def() to include and propagate these "{var_name}_derv_c_mag"
entries alongside the existing "{var_name}_derv_c" and "{var_name}_derv_c_mag"
mask handling so the user-facing outputs return magnetic atomic virials; locate
where translated_output_def builds the output spec and where call()/call_lower()
construct the returned dict and add the "{var_name}_derv_c_mag" keys (and any
corresponding masks) for the same var_name cases already handling non-magnetic
virials.
---
Nitpick comments:
In `@deepmd/pt_expt/infer/deep_eval.py`:
- Around line 128-139: The code infers spin models by checking keys in
model_data instead of using the explicit discriminator written during
serialization; change the branch in deep_eval.py to inspect
model_dict["model"].get("type") (or similar) for the value "spin_ener" and use
that to decide between SpinModel.deserialize(...) and
BaseModel.deserialize(...), setting self._dpmodel and self._is_spin accordingly
(keep references to model_data, SpinModel.deserialize, BaseModel.deserialize,
self._dpmodel, and self._is_spin to locate the change).
In `@source/api_cc/src/DeepSpinPTExpt.cc`:
- Around line 714-719: The get_type_map function (DeepSpinPTExpt::get_type_map)
appends a space after every element in type_map which produces a trailing space;
change the logic to only insert a separator between elements (e.g., loop and
append the first element without a leading space then prepend a space before
subsequent elements, or build the string and remove the final separator) so
type_map_str contains types separated by single spaces with no trailing space.
In `@source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc`:
- Around line 14-16: The EPSILON macro is redefined to be much larger for float
(1e-1) and should be accompanied by a short clarifying comment; update the code
around the EPSILON redefinition (the `#undef` EPSILON / `#define` EPSILON lines
referencing VALUETYPE) to add a brief comment explaining that spin model tests
use relaxed tolerances because iterative solvers/physical degeneracies lead to
larger numerical deviations in float precision, and note that the value was
chosen to avoid spurious failures while preserving test intent.
In `@source/lmp/tests/test_lammps_spin_pt2.py`:
- Around line 145-166: The _lammps function repeats the same units check three
times; consolidate by validating units once at the top of _lammps (raise
ValueError if units != "metal") and remove the subsequent redundant if/else
blocks, keeping the existing calls to PyLammps methods (units(), neighbor(),
mass(), timestep(), neigh_modify(), read_data(), fix()) and their current
arguments unchanged.
In `@source/tests/infer/deeppot_dpa_spin.yaml`:
- Line 2390: The canonical fixture currently includes a wall-clock "time" field
in source/tests/infer/deeppot_dpa_spin.yaml which causes noisy diffs; update the
fixture generation/serialization step that writes canonical YAML to either omit
the "time" key or normalize it to a fixed placeholder value (e.g., null or a
constant) before writing, and adjust any code that writes/validates fixtures so
generate_canonical_fixture / the YAML serializer drops the "time" field for
canonical outputs; ensure tests that consume the fixture are updated to not
expect a real timestamp.
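The time-field normalization from the first nitpick can be a one-line pop before serialization. A minimal sketch, assuming the fixture is a plain dict with a top-level "time" key:

```python
def normalize_fixture(doc: dict) -> dict:
    """Drop volatile metadata (the wall-clock "time" field) before writing
    the canonical fixture, so regenerating an identical model produces an
    identical file. The key name follows the review; adapt to the schema."""
    clean = dict(doc)  # shallow copy; leave the caller's dict untouched
    clean.pop("time", None)
    return clean


doc = {"model": {"type": "spin_ener"}, "time": "2026-04-04 10:38:37+00:00"}
print(normalize_fixture(doc))  # {'model': {'type': 'spin_ener'}}
```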
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: f096bea5-ecc4-48da-bcbc-0171e426c920
📒 Files selected for processing (38)
deepmd/dpmodel/infer/deep_eval.py
deepmd/dpmodel/model/base_model.py
deepmd/dpmodel/model/spin_model.py
deepmd/dpmodel/model/transform_output.py
deepmd/infer/deep_eval.py
deepmd/pd/infer/deep_eval.py
deepmd/pt/infer/deep_eval.py
deepmd/pt/model/model/spin_model.py
deepmd/pt/utils/serialization.py
deepmd/pt_expt/infer/deep_eval.py
deepmd/pt_expt/model/spin_model.py
deepmd/pt_expt/utils/serialization.py
source/api_c/tests/test_deepspin_a.cc
source/api_c/tests/test_deepspin_a_hpp.cc
source/api_cc/include/DeepSpin.h
source/api_cc/include/DeepSpinPT.h
source/api_cc/include/DeepSpinPTExpt.h
source/api_cc/src/DeepPotPTExpt.cc
source/api_cc/src/DeepSpin.cc
source/api_cc/src/DeepSpinPTExpt.cc
source/api_cc/src/commonPTExpt.h
source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc
source/api_cc/tests/test_deepspin_model_devi_ptexpt.cc
source/install/test_cc_local.sh
source/lmp/tests/test_lammps_spin_nopbc_pt.py
source/lmp/tests/test_lammps_spin_nopbc_pt2.py
source/lmp/tests/test_lammps_spin_pt.py
source/lmp/tests/test_lammps_spin_pt2.py
source/tests/infer/deeppot_dpa_spin.pth
source/tests/infer/deeppot_dpa_spin.yaml
source/tests/infer/deeppot_dpa_spin_md0.yaml
source/tests/infer/deeppot_dpa_spin_md1.yaml
source/tests/infer/gen_common.py
source/tests/infer/gen_spin.py
source/tests/infer/gen_spin_model_devi.py
source/tests/pt_expt/infer/test_deep_eval.py
source/tests/pt_expt/infer/test_deep_eval_spin.py
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@           Coverage Diff            @@
## master #5370 +/- ##
==========================================
- Coverage 82.37% 82.22% -0.15%
==========================================
Files 812 817 +5
Lines 83560 84479 +919
Branches 4091 4180 +89
==========================================
+ Hits 68832 69467 +635
- Misses 13507 13736 +229
- Partials      1221     1276     +55

☔ View full report in Codecov by Sentry.
Update gen_spin.py and gen_spin_model_devi.py to produce models with type_map=["Ni", "O", "H"] and use_spin=[True, False, False], adding a dummy unused type to verify correct handling. Regenerate all .yaml canonical weights and update reference values in C/C++/LAMMPS tests. Add rel=1e-4 tolerance doc for md.out model devi assertions (LAMMPS writes ~6 significant digits in default scientific format).
♻️ Duplicate comments (4)
source/tests/infer/gen_spin_model_devi.py (1)
226-228: ⚠️ Potential issue | 🟡 Minor — Underscore the unused eval() outputs in both LAMMPS reference loops.

v, ae, and av are never read in either branch, so Ruff will flag both unpacks.

♻️ Minimal fix
-    e, f, v, ae, av, fm, _ = dp.eval(
+    e, f, _v, _ae, _av, fm, _ = dp.eval(
         lmp_coord, lmp_box, lmp_atype, atomic=True, spin=lmp_spin
     )
@@
-    e, f, v, ae, av, fm, _ = dp.eval(
+    e, f, _v, _ae, _av, fm, _ = dp.eval(
         lmp_coord, None, lmp_atype, atomic=True, spin=lmp_spin
     )
**/*.py: Always runruff check .andruff format .before committing changes or CI will fail.Also applies to: 244-246
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/tests/infer/gen_spin_model_devi.py` around lines 226 - 228, The unpacking from dp.eval in the LAMMPS reference loops currently binds unused names (v, ae, av) which triggers Ruff; change the unpack patterns in both places where dp.eval is called (the first call currently assigning e, f, v, ae, av, fm, _ and the second call at the other branch) to use underscores for unused outputs (e.g., e, f, _, _, _, fm, _) so only used values are named and Ruff warnings go away.

source/lmp/tests/test_lammps_spin_nopbc_pt2.py (1)
22-23: ⚠️ Potential issue | 🟠 Major

Use unique temp paths for data.lmp and md.out.

These globals write into the shared source/lmp/tests directory, and teardown_module() only removes data_file. With source/lmp/tests/test_lammps_spin_pt2.py using the same names, parallel runs or interrupted reruns can clobber the input file or leave a stale md.out for the model-deviation assertions to read back. Please move both files under a per-test temp directory and pass those paths through the fixture instead of keeping them at module scope.

Also applies to: 91-92
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/lmp/tests/test_lammps_spin_nopbc_pt2.py` around lines 22 - 23: the module-level globals data_file and md_file write into the shared tests directory and can clash between tests; refactor so both are created inside a per-test temporary directory and their paths are provided via the test fixture instead of module scope. Concretely, move creation of data_file and md_file into the fixture used by tests in this file (and the other duplicate occurrences around the second block), return both Path objects from that fixture, update test signatures to accept the fixture value rather than using the globals, and ensure teardown_module() (or the fixture finalizer) removes both files to avoid stale md.out or data.lmp across parallel or interrupted runs.

source/lmp/tests/test_lammps_spin_pt2.py (1)
23-24: ⚠️ Potential issue | 🟠 Major

Isolate all generated LAMMPS artifacts per test.

data.lmp, md.out, and the virial dump file are fixed names in the shared test directory, while teardown_module() only removes data_file. That now races directly with source/lmp/tests/test_lammps_spin_nopbc_pt2.py and can also leave stale outputs behind for subsequent reruns. Please create these files under a per-test temp directory and thread the paths through the fixture.

Also applies to: 141-142, 201-203
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/lmp/tests/test_lammps_spin_pt2.py` around lines 23 - 24: tests use shared filenames (data_file, md_file, and the virial dump) in source/lmp/tests/test_lammps_spin_pt2.py, which races with other tests and leaves stale outputs; change the tests to create a unique per-test temp directory (use pytest tmp_path or a fixture) and construct data_file, md_file, and the virial dump path under that directory, thread these paths through the relevant fixture(s) and the tests (functions that reference data_file/md_file/teardown_module()), and update teardown_module() to remove all generated artifacts in that temp dir; apply the same change to the other occurrences noted around the blocks referencing data_file/md_file (the sections corresponding to lines ~141-142 and ~201-203).

source/tests/infer/gen_spin.py (1)
189-195: ⚠️ Potential issue | 🟡 Minor

Compare the .pth virial outputs here, or stop unpacking them.

This verification block never reads v_pth, ae_pth, or av_pth, so Ruff will still flag the unpack. Since this PR's backend-alignment change is on atomic virials, I'd prefer comparing those arrays here; otherwise rename them to dummy targets.

♻️ Lint-only fallback

```diff
- e_pth, f_pth, v_pth, ae_pth, av_pth, fm_pth, _ = dp_pth.eval(
+ e_pth, f_pth, _v_pth, _ae_pth, _av_pth, fm_pth, _ = dp_pth.eval(
      coord, box, atype, atomic=True, spin=spin
  )
```

As per coding guidelines, `**/*.py`: Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/tests/infer/gen_spin.py` around lines 189 - 195, The unpacked outputs from dp_pth.eval (v_pth, ae_pth, av_pth) are not used and trigger lint errors; update the verification to actually compare the atomic virials returned by dp_pth.eval (av_pth) against the test reference (e.g., av1) by adding a print of the max absolute diff (similar to the existing energy/force checks) so av_pth is consumed, or if you don't want to compare virials rename v_pth/ae_pth/av_pth to dummy names (_) when calling dp_pth.eval; locate the call to dp_pth.eval and the nearby prints (e_pth, f_pth, v_pth, ae_pth, av_pth, fm_pth, _ = dp_pth.eval(...)) and either add the av_pth vs av1 comparison print or change v_pth/ae_pth/av_pth to _ to satisfy the linter.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: a508a26a-f360-44cb-a4f6-af3f3b3eab2a
📒 Files selected for processing (13)
- source/api_c/tests/test_deepspin_a.cc
- source/api_c/tests/test_deepspin_a_hpp.cc
- source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
- source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc
- source/lmp/tests/test_lammps_spin_nopbc_pt.py
- source/lmp/tests/test_lammps_spin_nopbc_pt2.py
- source/lmp/tests/test_lammps_spin_pt.py
- source/lmp/tests/test_lammps_spin_pt2.py
- source/tests/infer/deeppot_dpa_spin.yaml
- source/tests/infer/deeppot_dpa_spin_md0.yaml
- source/tests/infer/deeppot_dpa_spin_md1.yaml
- source/tests/infer/gen_spin.py
- source/tests/infer/gen_spin_model_devi.py
✅ Files skipped from review due to trivial changes (3)
- source/tests/infer/deeppot_dpa_spin.yaml
- source/api_cc/tests/test_deeppot_dpa_ptexpt_spin.cc
- source/tests/infer/deeppot_dpa_spin_md0.yaml
🚧 Files skipped from review as they are similar to previous changes (4)
- source/api_cc/tests/test_deeppot_dpa_pt_spin.cc
- source/api_c/tests/test_deepspin_a_hpp.cc
- source/lmp/tests/test_lammps_spin_nopbc_pt.py
- source/lmp/tests/test_lammps_spin_pt.py
…Expt Add coordinate shift in the NoPBC code path of DeepSpinPTExpt::compute, matching the existing logic in DeepPotPTExpt. Without the shift, negative or out-of-box coordinates could produce incorrect neighbor lists via copy_coord.
Both _eval_model() and _eval_model_spin() guarded the default_fparam fallback with self._is_pt2, but .pte models also export fparam as a required positional input when dim_fparam > 0. Passing None crashes with "NoneType has no attribute size". Remove the _is_pt2 guard so the fallback applies to both .pt2 and .pte. Add TestSpinDefaultFparam test covering both formats.
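The fix amounts to applying the fallback regardless of export format; a minimal sketch of the intended logic (function and argument names here are illustrative, not the actual deep_eval.py code):

```python
import numpy as np


def resolve_fparam(fparam, dim_fparam, default_fparam):
    # Both .pt2 and .pte exports take fparam as a required positional
    # input whenever dim_fparam > 0, so the fallback must not be gated
    # on the format (the removed `self._is_pt2` guard).
    if dim_fparam > 0 and fparam is None:
        if default_fparam is None:
            raise ValueError("model requires fparam but none was given")
        fparam = np.asarray(default_fparam, dtype=np.float64)
    return fparam


print(resolve_fparam(None, 1, [0.5]))  # falls back to the default
```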
♻️ Duplicate comments (2)
source/tests/pt_expt/infer/test_deep_eval_spin.py (1)
253-253: ⚠️ Potential issue | 🟡 Minor

Rename or assert the unused unpack targets.

These tuples still leave mm, v, av, mm_no, and mm_ex unused, so Ruff will raise RUF059. Prefix them with _ or add assertions if you want to keep covering those outputs.

Suggested cleanup

```diff
- e, f, v, fm, mm = dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)
+ e, f, v, fm, _mm = dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)
@@
- e, f, v, ae, av, fm, mm = dp.eval(COORD, None, ATYPE, atomic=True, spin=SPIN)
+ e, f, _v, ae, _av, fm, _mm = dp.eval(
+     COORD, None, ATYPE, atomic=True, spin=SPIN
+ )
@@
- e, f, v, fm, mm = dp.eval(COORD, None, ATYPE, atomic=False, spin=SPIN)
+ e, f, _v, fm, _mm = dp.eval(COORD, None, ATYPE, atomic=False, spin=SPIN)
@@
- e_no, f_no, v_no, fm_no, mm_no = dp.eval(
+ e_no, f_no, v_no, fm_no, _mm_no = dp.eval(
      COORD, BOX, ATYPE, atomic=False, spin=SPIN
  )
@@
- e_ex, f_ex, v_ex, fm_ex, mm_ex = dp.eval(
+ e_ex, f_ex, v_ex, fm_ex, _mm_ex = dp.eval(
      COORD, BOX, ATYPE, atomic=False, spin=SPIN, fparam=[0.5]
  )
```

As per coding guidelines, `**/*.py`: Always run `ruff check .` and `ruff format .` before committing changes or CI will fail.

Also applies to: 280-280, 310-310, 372-376
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/tests/pt_expt/infer/test_deep_eval_spin.py` at line 253: the unpacking from dp.eval (e.g., the call where you assign e, f, v, fm, mm = dp.eval(COORD, BOX, ATYPE, atomic=False, spin=SPIN)) leaves several targets unused, which trips RUF059; rename unused variables by prefixing them with an underscore (e.g., _mm, _v, _fm) or add explicit assertions (assert mm is not None) if you intend to keep them, and apply the same change to the other dp.eval/unpack sites mentioned (the similar unpackings around the other occurrences) so unused tuple elements are either asserted or renamed with a leading underscore.

deepmd/pt_expt/infer/deep_eval.py (1)
326-335: ⚠️ Potential issue | 🟠 Major

Validate spin against the loaded model before dispatch.

Lines 326-335 still route solely on whether kwargs["spin"] is present. A spin model without spin falls into _eval_model(), and a non-spin model with a stray spin falls into _eval_model_spin(), so both cases fail later with a low-level runner signature error instead of a clear API error.

Minimal fix

```diff
  request_defs = self._get_request_defs(atomic)
  spins = kwargs.get("spin")
+ if self._is_spin and spins is None:
+     raise ValueError("spin input is required for spin models")
+ if not self._is_spin and spins is not None:
+     raise ValueError("spin input is only supported for spin models")
  if spins is not None:
      spins = np.array(spins)
  out = self._eval_func(self._eval_model_spin, numb_test, natoms)(
      coords, cells, atom_types, spins, fparam, aparam, request_defs
  )
```
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt_expt/infer/deep_eval.py` around lines 326 - 335, Validate spin/model compatibility before calling eval: check whether spins = kwargs.get("spin") is provided and whether the loaded model supports spin by testing the presence/callability of self._eval_model_spin (and/or self._eval_model). If spins is not None but self._eval_model_spin is missing or not callable, raise a clear API error (e.g. ValueError) saying a spin value was provided for a non-spin model; if spins is None but self._eval_model_spin exists (model requires spin), raise a clear API error saying spin is required for this model. After these checks, dispatch to self._eval_model_spin(...) when spins present and supported, otherwise to self._eval_model(...).
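The guard-then-dispatch pattern can be sketched in isolation (names mirror the review comment; the callables here are placeholders, not the real evaluator methods):

```python
def select_eval(is_spin_model, spin, eval_model, eval_model_spin):
    # Fail fast with a clear API error instead of letting the exported
    # runner blow up later on a signature mismatch.
    if is_spin_model and spin is None:
        raise ValueError("spin input is required for spin models")
    if not is_spin_model and spin is not None:
        raise ValueError("spin input is only supported for spin models")
    return eval_model_spin if is_spin_model else eval_model


fn = select_eval(True, [[0.0, 0.0, 1.0]], lambda: "plain", lambda: "spin")
print(fn())  # "spin"
```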
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 01459a32-6738-45e6-8cac-0104127babef
📒 Files selected for processing (2)
- deepmd/pt_expt/infer/deep_eval.py
- source/tests/pt_expt/infer/test_deep_eval_spin.py
…tests
- Raise ValueError when a spin model is called without spin input or a non-spin model is called with spin input
- Add test_eval_spin_model_requires_spin for both .pt2/.pte
- Add test_nonspin_model_rejects_spin for .pt2
- Add TestSpinAparam: verify aparam takes effect at the eager model level (torch.export doesn't support dynamic nframes with aparam yet)
- Add test_pte_eval_default_fparam: extend default_fparam coverage to .pte
🧹 Nitpick comments (1)
deepmd/pt_expt/infer/deep_eval.py (1)
756-907: Consider extracting shared logic between _eval_model and _eval_model_spin.

The neighbor-list building (lines 778-802), fparam/aparam handling (lines 815-847), and result collection (lines 892-907) are nearly identical to _eval_model. While acceptable for this PR, extracting common helpers (e.g., _build_extended_inputs, _prepare_fparam_aparam, _collect_results) would reduce duplication and maintenance burden.

The virial preservation pattern (lines 879-890) is critical for spin models and correctly saves energy_derv_c_redu before communicate_extended_output to preserve virtual-atom contributions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt_expt/infer/deep_eval.py` around lines 756 - 907, Duplicate logic between _eval_model and _eval_model_spin should be extracted: create helper methods (e.g., _build_extended_inputs to handle neighbor-list branching and return ext_coord_t/ext_atype_t/nlist_t/mapping_t, _prepare_fparam_aparam to encapsulate fparam/aparam creation and default-fparam handling, and _collect_results to translate model output into the requested request_defs) and replace the repeated blocks in both functions; ensure the virial preservation pattern around communicate_extended_output (saving/restoring "energy_derv_c_redu") is preserved exactly when refactoring so spin models keep virtual-atom virial contributions.
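Structurally, the suggested extraction could look like this skeleton (method names follow the review comment; the bodies are placeholders, not the real deep_eval.py logic):

```python
class DeepEvalSketch:
    """Skeleton of the proposed refactor: _eval_model and _eval_model_spin
    share the duplicated stages via three helpers. Placeholder bodies only."""

    def _build_extended_inputs(self, coords, cells, atom_types):
        # neighbor-list branching -> extended coords/types, nlist, mapping
        return coords, atom_types, None, None

    def _prepare_fparam_aparam(self, fparam, aparam):
        # default_fparam fallback and per-frame expansion live here once
        return fparam, aparam

    def _collect_results(self, request_defs, batch_output):
        # translate raw model output into the requested arrays
        return {name: batch_output.get(name) for name in request_defs}

    def _eval_model(self, coords, cells, atom_types, fparam, aparam, request_defs):
        self._build_extended_inputs(coords, cells, atom_types)
        fparam, aparam = self._prepare_fparam_aparam(fparam, aparam)
        return self._collect_results(request_defs, {"energy": 0.0})

    def _eval_model_spin(
        self, coords, cells, atom_types, spins, fparam, aparam, request_defs
    ):
        # identical scaffolding; only the spin inputs and the save/restore
        # of energy_derv_c_redu around communicate_extended_output differ
        self._build_extended_inputs(coords, cells, atom_types)
        fparam, aparam = self._prepare_fparam_aparam(fparam, aparam)
        return self._collect_results(
            request_defs, {"energy": 0.0, "force_mag": None}
        )
```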
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 21531408-29f8-433a-96e3-ca836e9b9ceb
📒 Files selected for processing (3)
- deepmd/pt_expt/infer/deep_eval.py
- source/tests/pt_expt/infer/test_deep_eval_spin.py
- source/tests/pt_expt/test_dp_freeze.py
- Fix torch.export duck-sizing collision: choose a sample nframes that doesn't match any other tensor dimension, preventing spurious guards that lock nframes to a constant
- Add aparam validation in eval: raise ValueError when dim_aparam > 0 but aparam is not provided (both spin and non-spin paths)
- Upgrade TestSpinAparam to test through .pt2/.pte (was eager-only)
- Add test_eval_without_aparam_raises for both .pt2/.pte
Move default_fparam and aparam tests from test_dp_freeze.py (freeze entrypoint tests) to test_deep_eval.py (inference tests) where they belong. Add .pte and .pt2 test classes for both, with consistency checks against eager model forward and cross-format comparison.
AOTInductor internally creates tensors using PyTorch's default device.
When pt/__init__.py sets torch.set_default_device("cuda:9999999") as a
test sentinel, spin .pt2 compilation fails on CPU-only CI with "Torch
not compiled with CUDA enabled". Clear the default device around
deserialize_to_file calls in spin test fixtures, matching the pattern
already used in non-spin .pt2 tests.
Add md_file cleanup to teardown_module in spin pt2 LAMMPS tests. Add inline comment documenting PyTorch ZIP STORED assumption.
Summary
- Fix add_mag bug in dpmodel SpinModel: atomic virial used add_mag=False while the PT backend used add_mag=True, causing inconsistent per-atom virial between .pth and .pt2
- Add "type": "spin_ener" to spin model serialization for correct deserialization dispatch across all backends
- Replace the .pth model file with canonical .yaml (dpmodel serialization); .pth and .pt2 are now regenerated from .yaml via convert_backend at test time

Test plan
Summary by CodeRabbit