Releases: pycroscopy/pyUSID
0.0.12
0.0.11
This version maintains compatibility with the latest `numpy` and re-instates `comp_utils`, which had been removed from `sidpy`, causing a version incompatibility. Some dependency requirements have also been updated. While there are no feature improvements, this release ensures continued functionality of pyUSID.
0.0.10r2
0.0.10
Note: Legacy holdouts such as `write_utils` will be removed in the next version.

- Minor bug fixes
- Function that writes `sidpy.Dataset` objects to USID-formatted HDF5 files
- Refactoring:
  - Renamed `numpy_translator` to `array_translator`
  - Renamed `write_utils` to `anc_build_utils`
  - Separate module for `Dimension`
  - Removed unnecessary files like `dtype_utils`, `io_utils`
  - Updated imports to use `sidpy` instead of `pyUSID`
0.0.9
Major changes

- Moved USID-agnostic functions in the following modules to the new package `sidpy`, since these will be shared with the new sister package `pyNSID`. The skeletons of the moved methods and classes will remain in pyUSID but are actually calling `sidpy` underneath. Deprecated local functions will be removed in a later release. Users are advised to use `sidpy` instead. List of affected modules:
  - `pyUSID.io`: `io_utils.py`, `write_utils.py`, `translator.py`, `reg_ref.py`, `dtype_utils.py`, `hdf_utils.simple.py`
  - `pyUSID.viz`: `plot_utils.py`, `jupyter_utils.py`
  - `pyUSID.processing.comp_utils.py`
Minor changes

- Bug fixes
- New "10 minutes to pyUSID" document, thanks to @rajgiriUW
- Fixed bug regarding dask array transpose in `pyUSID.io.USIDataset`
- No longer writing region references for ancillary datasets. Region reference functionality will be completely removed from pyUSID in an upcoming release since it is not being used meaningfully anywhere.
- Enabled multiple instances of the `pyUSID.Process` class to be executed in parallel via each MPI rank in order to facilitate ensemble / embarrassingly parallel processing.
- No longer shipping the tests directory when deploying via pip. The `site-packages` directory will no longer contain pyUSID's tests. Users interested in adding or evaluating pyUSID tests are recommended to clone the GitHub repository instead.
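The embarrassingly parallel pattern described above can be sketched with the standard library alone. This illustrative snippet uses a thread pool as a stand-in for the process- or MPI-based backends; the function names here (`process_chunk`, `run_in_parallel`) are hypothetical and not part of pyUSID's API:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Toy per-chunk computation standing in for a Process subclass's unit function."""
    return sum(x * x for x in chunk)

def run_in_parallel(chunks, max_workers=4):
    """Map the unit computation over independent chunks, as an ensemble job would.

    pool.map preserves input order, so results line up with chunks.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_chunk, chunks))

chunks = [[1, 2], [3, 4], [5, 6]]
print(run_in_parallel(chunks))  # → [5, 25, 61]
```

Because each chunk is independent, the same pattern scales out to one `Process` instance per MPI rank with no coordination beyond partitioning the data.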
0.0.8
Ability to work on multiple HDF5 files
The io functions in this version check for and facilitate I/O operations across multiple HDF5 files. Importantly, these capabilities have been passed on to the `Process` class, which now enables child classes to check for and write results into different files if required.
These upgrades are in preparation for use with DataFed, which is capable of capturing provenance across multiple files / records. It is no longer necessary for results to be contained within the same HDF5 file for provenance tracking.
Note: by default, operations will continue to take place within the same file unless appropriate keyword arguments are passed.
0.0.7
Major user-facing changes

`pyUSID.io`:

- `hdf_utils`:
  - `reshape_to_n_dims` now sorts dimensions from slowest to fastest instead of fastest to slowest in positions and spectroscopic
  - `write_ind_val_dsets` now allows dimensions to be ordered either fastest to slowest (default) or vice versa. The default will be swapped in the next release
  - `write_main_dataset` now assumes by default that the dimensions are arranged from fastest varying to slowest varying. Dimensions can now be specified in the opposite order as well by setting `slow_to_fast` to `True`
- `USIDataset`'s `sort_dims` now sorts dimensions from slowest to fastest varying
- `Translator`:
  - `is_valid_file` - implemented new function that returns the path of the input file that must be provided to `translate()`
  - `generate_dummy_main_parms()` removed

`pyUSID.processing`:

- `parallel_compute` now uses the `multiprocessing` backend by default
- `process`:
  - `Process` now requires `process_name` and optionally `parms_dict` to be passed to `__init__` for automated checking of existing results
  - `Process` can now return a `dask.array` object from `read_data_chunk()`
  - `parallel_compute` - empty copy removed from this legacy position
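The fastest-to-slowest convention above can be illustrated in plain Python: flattening an N-dimensional grid with the first listed dimension varying fastest produces a specific ordering of index tuples. This sketch (a hypothetical helper, not pyUSID code) builds the index table both ways:

```python
from itertools import product

def build_indices(dim_sizes, fastest_first=True):
    """Return flattened index tuples for the given dimension sizes.

    With fastest_first=True, the FIRST dimension varies fastest,
    mirroring the default ordering described in the release notes.
    """
    # itertools.product varies its LAST argument fastest, so reverse the
    # sizes and then flip each tuple back for fastest-first ordering.
    if fastest_first:
        return [t[::-1] for t in product(*[range(s) for s in reversed(dim_sizes)])]
    return list(product(*[range(s) for s in dim_sizes]))

# A 2 x 3 grid where dimension 0 (size 2) varies fastest:
print(build_indices([2, 3]))
# → [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
```

Flipping `fastest_first` yields the slowest-to-fastest table, which is the distinction the `slow_to_fast` keyword controls.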
Major internal changes

Features:

- New function - `pyUSID.io.USIDataset.__slice_n_dim_form()` accelerates and simplifies slicing of datasets having an N-dimensional form (the most popular use-case)
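As a rough illustration of why a known N-dimensional form speeds up slicing: a slice reduces to direct offset arithmetic into the flat data instead of reshaping the 2D table first. This standalone sketch (not pyUSID's implementation) computes such an offset, with the last dimension varying fastest:

```python
def flat_index(indices, dim_sizes):
    """Convert N-dimensional indices to a flat offset (last dimension fastest).

    Given the dataset's N-dimensional shape, a point slice becomes a
    constant-time computation like this rather than a full reshape.
    """
    offset = 0
    for idx, size in zip(indices, dim_sizes):
        offset = offset * size + idx
    return offset

data = list(range(24))  # stand-in for a flattened 2 x 3 x 4 dataset
print(flat_index((1, 2, 3), (2, 3, 4)))  # → 23
```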
More robust checking in:

- `pyUSID.io.hdf_utils.reshape_to_n_dims`
- `pyUSID.processing.Process`
- `pyUSID.io.USIDataset`
- `pyUSID.io.hdf_utils.simple.check_and_link_ancillary()`
Bug fixes:

- `pyUSID.Process`:
  - Now able to specify which partial group to continue computation from
  - Corrected identification of existing partial and complete results
- `pyUSID.io.hdf_utils.get_unit_values` now catches dimensions in spectroscopic indices that do not start from 0
- Explicitly stating the mode when opening HDF5 files, per new h5py versions
- Better forward-compatible import statements
- `Process` cookbook now fixed to use
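The `get_unit_values` fix concerns index columns that begin at a nonzero value. A minimal stdlib sketch of the underlying idea (a hypothetical helper, not pyUSID's code) maps each unique index to its value regardless of the starting offset:

```python
def unit_values(indices, values):
    """Return one value per unique index, in ascending index order.

    Works even when the index column starts at a nonzero number,
    e.g. a spectroscopic dimension whose indices begin at 2.
    """
    seen = {}
    for idx, val in zip(indices, values):
        if idx not in seen:  # keep the first value observed for each index
            seen[idx] = val
    return [seen[idx] for idx in sorted(seen)]

# Indices starting from 2 rather than 0:
print(unit_values([2, 3, 4, 2, 3, 4], [0.1, 0.2, 0.3, 0.1, 0.2, 0.3]))
# → [0.1, 0.2, 0.3]
```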
Testing infrastructure:

- Standard BEPS dataset:
  - Each dimension's values and the N-dimensional form are also written to HDF5 for quick and accurate testing in both `hdf_utils` and `USIDataset`
  - Results are now compound and complex-valued to cover more ground in testing
  - Now also being used to test `USIDataset`
- Far more extensive tests on reshaping with varying dimension ordering, `numpy` vs `dask.array`, etc.
- Built configurable base tests whose arguments can be changed to simulate different unit tests in `hdf_utils/model`
Added unit tests for the following:

- `io`:
  - `io_utils`
  - `hdf_utils`: `print_tree`, `reshape_to_n_dims`
  - `USIDataset`
- `processing`:
  - `Process`
  - `comp_utils`

Code coverage rose from 51% to 64%. `viz` is the last remaining sub-package that needs significant testing.
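The "configurable base tests" idea can be sketched with the standard `unittest` module: class attributes parameterize a shared test body, and subclasses only change the configuration. This is an illustrative pattern, not pyUSID's actual test code, and `reshape_1d` is a toy stand-in:

```python
import unittest

def reshape_1d(flat, shape):
    """Toy stand-in for the reshaping logic under test."""
    rows, cols = shape
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

class BaseReshapeTest(unittest.TestCase):
    """Base test whose class attributes configure the scenario."""
    flat = [0, 1, 2, 3]
    shape = (2, 2)
    expected = [[0, 1], [2, 3]]

    def test_reshape(self):
        self.assertEqual(reshape_1d(self.flat, self.shape), self.expected)

class WideReshapeTest(BaseReshapeTest):
    """Same test body, different configuration."""
    flat = [0, 1, 2, 3, 4, 5]
    shape = (2, 3)
    expected = [[0, 1, 2], [3, 4, 5]]

# run with: python -m unittest <this_module>
```

Each subclass reuses the inherited `test_reshape`, so one body covers many dimension orderings and shapes.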
0.0.6.3
- Bug fixes associated with `pyUSID.io.dtype_utils.stack_real_to_target_dtype`
- Bug fixes associated with the latest `numpy` and `h5py` versions breaking `pyUSID.USIDataset`
- Improvements to the MPI portions of the `pyUSID.Process` class
- Parts of `pyUSID.viz.plot_utils` now work with `dask.array` inputs