Add support for PyTorch
The first steps are:
- [x] Add PyTorch to pyproject.toml as an optional dependency, and make it a new extra.
- [x] Add PyTorch to the `test` extra, so that it is installed before running pytest. This would be the ideal setup, but I'm not sure whether it can be installed in the same environment as TensorFlow.
- [x] Regenerate the lock file by manually running the CI job. This must be done on the Docker image used for the CI, not on your PC.
- [x] Download the artifact with the lock file, and upload it to the repo.
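The dependency step above could look roughly like this in pyproject.toml; the extra name and the version pin are assumptions, not the repo's actual values:

```toml
[project.optional-dependencies]
# Hypothetical extra for the new backend.
torch = ["torch>=2.0"]
# The test extra pulls in both backends so pytest can exercise them together.
test = [
    "pytest",
    "tensorflow",
    "torch>=2.0",
]
```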
Now you can add a new `PyTorchModel` class inside the `surrogates` module.
For that to work, you'll have to provide concrete implementations of:
- [x] `_load_model()`
- [x] `_save_model()`
- [x] `_predict_model_jacobian()`
- [x] `_predict_model_jvp()`
- [x] `_predict_model_output()`
- [x] `_predict_model_output_and_jacobian_fwd()`
- [x] `_predict_model_output_and_jacobian_rev()`
- [x] `_predict_model_vjp()`
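A minimal sketch of what those hooks could map to in PyTorch, assuming the base class expects these exact method names and that `torch.func` (PyTorch ≥ 2.0) is acceptable for the derivatives; the wrapper shape is hypothetical, check the TensorFlow implementation for the real contract:

```python
import torch


class PyTorchModel:
    """Hypothetical sketch of the surrogate wrapper around an nn.Module."""

    def __init__(self, model: torch.nn.Module):
        self._model = model.eval()

    def _save_model(self, path):
        # Persist only the weights; the architecture is rebuilt on load.
        torch.save(self._model.state_dict(), path)

    def _load_model(self, path):
        self._model.load_state_dict(torch.load(path))

    def _predict_model_output(self, x):
        # Plain forward pass, no gradient tracking needed.
        with torch.no_grad():
            return self._model(x)

    def _predict_model_jacobian(self, x):
        # Full Jacobian via reverse mode; jacfwd would also work.
        return torch.func.jacrev(self._model)(x)

    def _predict_model_jvp(self, x, v):
        # Jacobian-vector product (forward mode): returns J @ v.
        _, jvp_out = torch.func.jvp(self._model, (x,), (v,))
        return jvp_out

    def _predict_model_vjp(self, x, v):
        # Vector-Jacobian product (reverse mode): returns v @ J.
        _, vjp_fn = torch.func.vjp(self._model, x)
        return vjp_fn(v)[0]

    def _predict_model_output_and_jacobian_fwd(self, x):
        return self._predict_model_output(x), torch.func.jacfwd(self._model)(x)

    def _predict_model_output_and_jacobian_rev(self, x):
        return self._predict_model_output(x), torch.func.jacrev(self._model)(x)
```

Reverse mode (`jacrev`, `vjp`) tends to win for MISO-like shapes (many inputs, few outputs) and forward mode (`jacfwd`, `jvp`) for SIMO-like ones, which is presumably why the base class splits `_fwd` and `_rev` variants.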
After this, you should:
- [x] Add the tests to reach 100% coverage. These should cover the SISO, SIMO, MISO, and MIMO cases. I prefer to keep the repo light at the cost of some CI time, so the models should be generated during the CI and never uploaded to the repo.
- [ ] The OpenMDAO components should work out of the box, but adding some tests for that would be nice.
- [ ] Write the documentation.
- [x] Increment the version number. Since I'm trying to use [semantic versioning](https://semver.org/), you should change the MINOR version.
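Generating the models during the CI could be done with a parametrized test along these lines; the test name and the choice of a bare `torch.nn.Linear` as the generated model are assumptions for illustration:

```python
import pytest
import torch

# Tiny models are built on the fly during the test run, so nothing model-like
# needs to be committed to the repo. (n_inputs, n_outputs) spans the four cases.
@pytest.mark.parametrize(
    "n_inputs, n_outputs",
    [(1, 1), (1, 3), (3, 1), (3, 2)],  # SISO, SIMO, MISO, MIMO
)
def test_jacobian_shape(n_inputs, n_outputs):
    model = torch.nn.Linear(n_inputs, n_outputs)
    x = torch.randn(n_inputs)
    jac = torch.func.jacrev(model)(x)
    # The Jacobian of an R^n -> R^m map has shape (m, n).
    assert jac.shape == (n_outputs, n_inputs)
```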
Please check the existing TensorFlow code as a guideline.