low_res_data: Low resolution data as a dictionary with the key set to `channel_{i}_{k}`,
    where i is a number between 1 and 4 and k is a letter between A and D.
    For each dictionary entry, a numpy array is expected with shape
    (train_id, ToF channel).
high_res_data: Reference high resolution data with a one-to-one match to the
    low resolution data in the train_id dimension. Shape (train_id, ToF channel).
high_res_photon_energy: Photon energy axis for the high-resolution data.

Returns: Smoothened high resolution spectrum.
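A minimal sketch of how inputs with the shapes documented above could be assembled; the variable names, the illustrative photon-energy values, and the `model.fit(...)` call (on an instance of the `Model` class in this diff) are assumptions for illustration, not taken from the diff itself.

import numpy as np

n_trains, n_tof, n_hr = 100, 600, 1000
# one entry per channel key: i in 1..4, k in A..D, each of shape (train_id, ToF channel)
low_res_data = {
    f"channel_{i}_{k}": np.random.randn(n_trains, n_tof)
    for i in range(1, 5)
    for k in "ABCD"
}
# reference high resolution data, matched one-to-one in the train_id dimension
high_res_data = np.random.randn(n_trains, n_hr)
# photon energy axis for the high resolution data (illustrative values)
high_res_photon_energy = np.linspace(990.0, 1010.0, n_hr)

# smoothened = model.fit(low_res_data, high_res_data, high_res_photon_energy)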
...
@@ -272,7 +316,9 @@ class Model(object):
Args:
    low_res_data: Low resolution data as in the fit step with shape (train_id, channel, ToF channel).

Returns: High resolution data with shape (train_id, ToF channel, 3).
    Component 0 of the last dimension is the predicted spectrum.
    Components 1 and 2 correspond to two sources of uncertainty.
"""
low_res = self.preprocess_low_res(low_res_data)
low_pca = self.lr_pca.transform(low_res)
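A hedged usage sketch of the output layout described above, assuming this method is called as `model.predict(low_res_data)`; the variable names are illustrative only.

# prediction = model.predict(low_res_data)   # shape (train_id, ToF channel, 3)
# spectrum = prediction[..., 0]              # component 0: predicted spectrum
# unc_a = prediction[..., 1]                 # component 1: first uncertainty source
# unc_b = prediction[..., 2]                 # component 2: second uncertainty source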
...
@@ -297,23 +343,27 @@ class Model(object):
Args:
    filename: Name of the H5 file in which to save this model.
"""
#joblib.dump(self, filename)
with h5py.File(filename, 'w') as hf:
    # transform parameters into a dict
    d = self.fit_model.as_dict()
    d.update(self.parameters())
    # dump them in the file
    dump_in_group(d, hf)
# this is not ideal, because it depends on the knowledge of the PCA
# object structure, but saving to a joblib file would mean creating several
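For reference, a minimal sketch of what the `dump_in_group` helper could look like, assuming it mirrors the inline loop it replaces (integer values stored as HDF5 attributes, everything else as datasets); the actual helper may differ, for instance by recursing into nested dictionaries.

def dump_in_group(d, group):
    # `group` is expected to be an open h5py File or Group;
    # store integers as attributes and other values as datasets
    for key, value in d.items():
        if isinstance(value, int):
            group.attrs[key] = value
        else:
            group.create_dataset(key, data=value)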