
Converters and Visualizer

About the Converter API

The Converter API is a new feature in the v0.12.x release of SpectraFit with a major focus on:

  1. Data Validation
  2. Settings Management

In general, input and data files are converted to the internal data formats: dictionaries for the input files and pandas DataFrames for the data files. The Converter API is realized with an abstract base class (ABC) and the @abstractmethod decorator, while the File API uses the pydantic library.
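As a rough sketch of these two internal representations (plain json and pandas are used here as stand-ins for the converter plugins; the file names are hypothetical):

Python
from pathlib import Path
import json

import pandas as pd

# Input files (JSON/YAML/TOML) become plain dictionaries ...
with Path("fit_input.json").open(encoding="utf-8") as f:
    input_dict = dict(json.load(f))

# ... while data files become pandas DataFrames.
data_df = pd.read_csv(Path("spectrum.csv"))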

Meta Data Converter Class

Abstract base class for the converter plugins.

Converter

Bases: ABC

Abstract base class for the converter plugin.

The abstract base class is used to define the interface for the converter plugins:

  • get_args: Get the arguments from the command line.
  • convert: Convert the input file to the output file.
  • __call__: Call the converter plugin.

Currently used for:

  • Conversion of the input file.
  • Conversion of the output file.
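For illustration, a minimal plugin might implement this interface as in the following sketch; the class name, argument parser, and JSON handling are hypothetical and only demonstrate the abstract methods in use.

Python
import argparse
import json
from pathlib import Path
from typing import Any, Dict, MutableMapping

from spectrafit.plugins.converter import Converter


class JSONPassThroughConverter(Converter):
    """Hypothetical plugin: read a JSON file and write it back out."""

    def get_args(self) -> Dict[str, Any]:
        parser = argparse.ArgumentParser(description="Hypothetical JSON pass-through.")
        parser.add_argument("infile", type=Path, help="Input JSON file.")
        parser.add_argument("-e", "--export-format", type=str, default="json")
        return vars(parser.parse_args())

    @staticmethod
    def convert(infile: Path, file_format: str) -> MutableMapping[str, Any]:
        # Read the input file into a dictionary.
        with open(infile, encoding="utf-8") as f:
            return json.load(f)

    def save(self, data: Any, fname: Path, export_format: str) -> None:
        # Write the dictionary back out with the export format as suffix.
        with open(fname.with_suffix(f".{export_format}"), "w", encoding="utf-8") as f:
            json.dump(data, f, indent=4)

    def __call__(self) -> None:
        args = self.get_args()
        self.save(
            data=self.convert(args["infile"], "json"),
            fname=args["infile"],
            export_format=args["export_format"],
        )
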
Source code in spectrafit/plugins/converter.py
Python
class Converter(ABC):
    """Abstract base class for the converter plugin.

    The abstract base class is used to define the interface for the converter plugins:

    - get_args: Get the arguments from the command line.
    - convert: Convert the input file to the output file.
    - __call__: Call the converter plugin.

    Currently used for:

    - Conversion of the input file.
    - Conversion of the output file.
    """

    @abstractmethod
    def get_args(self) -> Dict[str, Any]:
        """Get the arguments from the command line.

        Returns:
            Dict[str, Any]: Return the input file arguments as a dictionary without
                 additional information beyond the command line arguments.

        Raises:
            ValueError: If the output file format is not supported.
        """

    @staticmethod
    @abstractmethod
    def convert(infile: Path, file_format: str) -> MutableMapping[str, Any]:
        """Convert the input file to the target file format.

        It is an abstract method and must be implemented in the derived class.

        Args:
            infile (Path): Input file as a path object.
            file_format (str): Target file format.

        Returns:
            MutableMapping[str, Any]: Converted file as a dictionary.
        """

    @abstractmethod
    def save(self, data: Any, fname: Path, export_format: str) -> None:
        """Save the data to the target file format.

        Args:
            data (Any): Data to save.
            fname (Path): Filename of the target file.
            export_format (str): Target file format.
        """

    @abstractmethod
    def __call__(self) -> None:
        """Call the converter plugin."""

__call__() abstractmethod

Call the converter plugin.

Source code in spectrafit/plugins/converter.py
Python
@abstractmethod
def __call__(self) -> None:
    """Call the converter plugin."""

convert(infile, file_format) staticmethod abstractmethod

Convert the input file to the target file format.

It is an abstract method and must be implemented in the derived class.

Parameters:

Name Type Description Default
infile Path

Input file as a path object.

required
file_format str

Target file format.

required

Returns:

Type Description
MutableMapping[str, Any]

MutableMapping[str, Any]: Converted file as a dictionary.

Source code in spectrafit/plugins/converter.py
Python
@staticmethod
@abstractmethod
def convert(infile: Path, file_format: str) -> MutableMapping[str, Any]:
    """Convert the input file to the target file format.

    It is an abstract method and must be implemented in the derived class.

    Args:
        infile (Path): Input file as a path object.
        file_format (str): Target file format.

    Returns:
        MutableMapping[str, Any]: Converted file as a dictionary.
    """

get_args() abstractmethod

Get the arguments from the command line.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Return the input file arguments as a dictionary without additional information beyond the command line arguments.

Raises:

Type Description
ValueError

If the output file format is not supported.

Source code in spectrafit/plugins/converter.py
Python
@abstractmethod
def get_args(self) -> Dict[str, Any]:
    """Get the arguments from the command line.

    Returns:
        Dict[str, Any]: Return the input file arguments as a dictionary without
             additional information beyond the command line arguments.

    Raises:
        ValueError: If the output file format is not supported.
    """

save(data, fname, export_format) abstractmethod

Save the data to the target file format.

Parameters:

Name Type Description Default
data Any

Data to save.

required
fname Path

Filename of the target file.

required
export_format str

Target file format.

required
Source code in spectrafit/plugins/converter.py
Python
@abstractmethod
def save(self, data: Any, fname: Path, export_format: str) -> None:
    """Save the data to the target file format.

    Args:
        data (Any): Data to save.
        fname (Path): Filename of the target file.
        export_format (str): Target file format.
    """

Input and Output File Converter for object-oriented formats

Convert the input and output files to the preferred file format.

FileConverter

Bases: Converter

Convert the input and output file to the preferred file format.

Supported file formats

Currently supported file formats:

-[x] JSON
-[x] YAML (YML)
-[x] TOML (LOCK for the lock file)
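As a sketch of programmatic use (the command-line runner wraps the same two calls; the file name is hypothetical, and 'json'/'yaml' must be among the supported formats listed above):

Python
from pathlib import Path

from spectrafit.plugins.file_converter import FileConverter

converter = FileConverter()
# Read the SpectraFit input file into a dictionary ...
data = converter.convert(infile=Path("fit_input.json"), file_format="json")
# ... and write it back out as YAML; the suffix follows the export format.
converter.save(data=data, fname=Path("fit_input.json"), export_format="yaml")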

Source code in spectrafit/plugins/file_converter.py
Python
class FileConverter(Converter):
    """Convert the input and output file to the preferred file format.

    !!! info "Supported file formats"

        Currently supported file formats:

        -[x] JSON
        -[x] YAML (YML)
        -[x] TOML (LOCK for the lock file)
    """

    def get_args(self) -> Dict[str, Any]:
        """Get the arguments from the command line.

        Returns:
            Dict[str, Any]: Return the input file arguments as a dictionary without
                additional information beyond the command line arguments.
        """
        parser = argparse.ArgumentParser(
            description="Converter for 'SpectraFit' input and output files."
        )
        parser.add_argument(
            "infile",
            type=Path,
            help="Filename of the 'SpectraFit' input or output file.",
        )
        parser.add_argument(
            "-f",
            "--file-format",
            help="File format for the conversion.",
            type=str,
            choices=choices,
        )
        parser.add_argument(
            "-e",
            "--export-format",
            help="File format for the export.",
            type=str,
            default="json",
            choices=choices,
        )
        return vars(parser.parse_args())

    @staticmethod
    def convert(infile: Path, file_format: str) -> MutableMapping[str, Any]:
        """Convert the input file to the output file.

        Args:
            infile (Path): The input file as a path object.
            file_format (str): The output file format.

        Raises:
            ValueError: If the input file format is not supported.

        Returns:
            MutableMapping[str, Any] : The converted file as a dictionary.
        """
        if file_format not in choices:
            raise ValueError(f"The input file format '{file_format}' is not supported.")

        return read_input_file(infile)

    def save(self, data: Any, fname: Path, export_format: str) -> None:
        """Save the converted file.

        Raises:
            ValueError: If the input file format is identical with the output format.
            ValueError: If the output file format is not supported.

        Args:
            data (Any): The converted file as a dictionary.
            fname (Path): The input file as a path object.
            export_format (str): The output file format.
        """
        if fname.suffix[1:] == export_format:
            raise ValueError(
                f"The input file suffix '{fname.suffix[1:]}' is similar to the"
                f" output file format '{export_format}'."
                "Please use a different output file suffix."
            )

        if export_format not in choices:
            raise ValueError(
                f"The output file format '{export_format}' is not supported."
            )

        if export_format == "json":
            with open(
                fname.with_suffix(f".{export_format}"), "w", encoding="utf-8"
            ) as f:
                json.dump(data, f, indent=4)
        elif export_format in {"yaml", "yml"}:
            with open(
                fname.with_suffix(f".{export_format}"), "w", encoding="utf-8"
            ) as f:
                yaml.dump(data, f, default_flow_style=False)
        elif export_format in {"toml", "lock"}:
            with open(
                fname.with_suffix(f".{export_format}"),
                "wb+",
            ) as f:
                tomli_w.dump(dict(**data), f)

    def __call__(self) -> None:
        """Run the converter via cmd commands."""
        args = self.get_args()
        self.save(
            data=self.convert(infile=args["infile"], file_format=args["file_format"]),
            fname=args["infile"],
            export_format=args["export_format"],
        )

__call__()

Run the converter via cmd commands.

Source code in spectrafit/plugins/file_converter.py
Python
def __call__(self) -> None:
    """Run the converter via cmd commands."""
    args = self.get_args()
    self.save(
        data=self.convert(infile=args["infile"], file_format=args["file_format"]),
        fname=args["infile"],
        export_format=args["export_format"],
    )

convert(infile, file_format) staticmethod

Convert the input file to the output file.

Parameters:

Name Type Description Default
infile Path

The input file as a path object.

required
file_format str

The output file format.

required

Raises:

Type Description
ValueError

If the input file format is not supported.

Returns:

Type Description
MutableMapping[str, Any]

MutableMapping[str, Any] : The converted file as a dictionary.

Source code in spectrafit/plugins/file_converter.py
Python
@staticmethod
def convert(infile: Path, file_format: str) -> MutableMapping[str, Any]:
    """Convert the input file to the output file.

    Args:
        infile (Path): The input file as a path object.
        file_format (str): The output file format.

    Raises:
        ValueError: If the input file format is not supported.

    Returns:
        MutableMapping[str, Any] : The converted file as a dictionary.
    """
    if file_format not in choices:
        raise ValueError(f"The input file format '{file_format}' is not supported.")

    return read_input_file(infile)

get_args()

Get the arguments from the command line.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Return the input file arguments as a dictionary without additional information beyond the command line arguments.

Source code in spectrafit/plugins/file_converter.py
Python
def get_args(self) -> Dict[str, Any]:
    """Get the arguments from the command line.

    Returns:
        Dict[str, Any]: Return the input file arguments as a dictionary without
            additional information beyond the command line arguments.
    """
    parser = argparse.ArgumentParser(
        description="Converter for 'SpectraFit' input and output files."
    )
    parser.add_argument(
        "infile",
        type=Path,
        help="Filename of the 'SpectraFit' input or output file.",
    )
    parser.add_argument(
        "-f",
        "--file-format",
        help="File format for the conversion.",
        type=str,
        choices=choices,
    )
    parser.add_argument(
        "-e",
        "--export-format",
        help="File format for the export.",
        type=str,
        default="json",
        choices=choices,
    )
    return vars(parser.parse_args())

save(data, fname, export_format)

Save the converted file.

Raises:

Type Description
ValueError

If the input file format is identical with the output format.

ValueError

If the output file format is not supported.

Parameters:

Name Type Description Default
data Any

The converted file as a dictionary.

required
fname Path

The input file as a path object.

required
export_format str

The output file format.

required
Source code in spectrafit/plugins/file_converter.py
Python
def save(self, data: Any, fname: Path, export_format: str) -> None:
    """Save the converted file.

    Raises:
        ValueError: If the input file format is identical with the output format.
        ValueError: If the output file format is not supported.

    Args:
        data (Any): The converted file as a dictionary.
        fname (Path): The input file as a path object.
        export_format (str): The output file format.
    """
    if fname.suffix[1:] == export_format:
        raise ValueError(
            f"The input file suffix '{fname.suffix[1:]}' is similar to the"
            f" output file format '{export_format}'."
            "Please use a different output file suffix."
        )

    if export_format not in choices:
        raise ValueError(
            f"The output file format '{export_format}' is not supported."
        )

    if export_format == "json":
        with open(
            fname.with_suffix(f".{export_format}"), "w", encoding="utf-8"
        ) as f:
            json.dump(data, f, indent=4)
    elif export_format in {"yaml", "yml"}:
        with open(
            fname.with_suffix(f".{export_format}"), "w", encoding="utf-8"
        ) as f:
            yaml.dump(data, f, default_flow_style=False)
    elif export_format in {"toml", "lock"}:
        with open(
            fname.with_suffix(f".{export_format}"),
            "wb+",
        ) as f:
            tomli_w.dump(dict(**data), f)

command_line_runner()

Run the converter from the command line.

Source code in spectrafit/plugins/file_converter.py
Python
def command_line_runner() -> None:
    """Run the converter from the command line."""
    FileConverter()()

Data Converter for tabular data formats like CSV, Excel, etc.

Transform the input data to a CSV file.

DataConverter

Bases: Converter

Convert the data files to a CSV file.

Supported file formats

Currently supported file formats:

-[x] ATHENA
-[x] TXT
-[ ] more to come

The DataConverter class can also be used in a Jupyter notebook.
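For example, in a notebook a plain text data file can be converted directly to a DataFrame (the file name is hypothetical; the file-format string follows the list above):

Python
from pathlib import Path

from spectrafit.plugins.data_converter import DataConverter

converter = DataConverter()
# Convert the text file to a pandas DataFrame ...
df = converter.convert(infile=Path("spectrum.txt"), file_format="TXT")
# ... and optionally write it out as CSV next to the input file.
converter.save(data=df, fname=Path("spectrum.txt"), export_format="csv")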

Source code in spectrafit/plugins/data_converter.py
Python
class DataConverter(Converter):
    """Convert the data files to a CSV file.

    !!! info "Supported file formats"

        Currently supported file formats:

        -[x] ATHENA
        -[x] TXT
        -[ ] more to come

        The `DataConverter` class can also be used in a Jupyter notebook.
    """

    def get_args(self) -> Dict[str, Any]:
        """Get the arguments from the command line.

        Returns:
            Dict[str, Any]: Return the input file arguments as a dictionary without
                additional information beyond the command line arguments.
        """
        parser = argparse.ArgumentParser(
            description="Converter for 'SpectraFit' from data files to CSV files."
        )
        parser.add_argument(
            "infile",
            type=Path,
            help="Filename of the data file to convert.",
        )
        parser.add_argument(
            "-f",
            "--file-format",
            help="File format for the conversion.",
            type=str,
            choices=choices,
        )
        parser.add_argument(
            "-e",
            "--export-format",
            help="File format for the export.",
            type=str,
            default="csv",
            choices=choices_export,
        )
        return vars(parser.parse_args())

    @staticmethod
    def convert(infile: Path, file_format: str) -> pd.DataFrame:
        """Convert the input file to the target file format.

        Args:
            infile (Path): Input file as a path object.
            file_format (str): Target file format.

        Raises:
            ValueError: If the file format is not supported.

        Returns:
            pd.DataFrame: The converted data as a pandas DataFrame.
        """
        if file_format.upper() not in choices:
            raise ValueError(f"File format '{file_format}' is not supported.")

        if callable(DataFormats.__dict__[file_format].names):
            names = DataFormats.__dict__[file_format].names(infile)
        else:
            names = DataFormats.__dict__[file_format].names
        DataFormats.__dict__[file_format].names = names

        return pd.read_csv(
            infile, **DataFormats.__dict__[file_format].dict(exclude={"file_suffixes"})
        )

    def save(self, data: Any, fname: Path, export_format: str) -> None:
        """Save the converted data to a CSV file.

        Raises:
            ValueError: If the export format is not supported.

        Args:
            data (Any): The converted data, which is a pandas DataFrame.
            fname (Path): The file name of the data file.
            export_format (str): The file format of the exported file.
        """
        if export_format.lower() not in choices_export:
            raise ValueError(f"Export format '{export_format}' is not supported.")
        data.to_csv(fname.with_suffix(f".{export_format}"), index=False)

    def __call__(self) -> None:
        """Run the converter."""
        args = self.get_args()
        self.save(
            data=self.convert(
                args["infile"],
                args["file_format"],
            ),
            fname=args["infile"],
            export_format=args["export_format"],
        )

__call__()

Run the converter.

Source code in spectrafit/plugins/data_converter.py
Python
def __call__(self) -> None:
    """Run the converter."""
    args = self.get_args()
    self.save(
        data=self.convert(
            args["infile"],
            args["file_format"],
        ),
        fname=args["infile"],
        export_format=args["export_format"],
    )

convert(infile, file_format) staticmethod

Convert the input file to the target file format.

Parameters:

Name Type Description Default
infile Path

Input file as a path object.

required
file_format str

Target file format.

required

Raises:

Type Description
ValueError

If the file format is not supported.

Returns:

Type Description
pd.DataFrame

pd.DataFrame: The converted data as a pandas DataFrame.

Source code in spectrafit/plugins/data_converter.py
Python
@staticmethod
def convert(infile: Path, file_format: str) -> pd.DataFrame:
    """Convert the input file to the target file format.

    Args:
        infile (Path): Input file as a path object.
        file_format (str): Target file format.

    Raises:
        ValueError: If the file format is not supported.

    Returns:
        pd.DataFrame: The converted data as a pandas DataFrame.
    """
    if file_format.upper() not in choices:
        raise ValueError(f"File format '{file_format}' is not supported.")

    if callable(DataFormats.__dict__[file_format].names):
        names = DataFormats.__dict__[file_format].names(infile)
    else:
        names = DataFormats.__dict__[file_format].names
    DataFormats.__dict__[file_format].names = names

    return pd.read_csv(
        infile, **DataFormats.__dict__[file_format].dict(exclude={"file_suffixes"})
    )

get_args()

Get the arguments from the command line.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Return the input file arguments as a dictionary without additional information beyond the command line arguments.

Source code in spectrafit/plugins/data_converter.py
Python
def get_args(self) -> Dict[str, Any]:
    """Get the arguments from the command line.

    Returns:
        Dict[str, Any]: Return the input file arguments as a dictionary without
            additional information beyond the command line arguments.
    """
    parser = argparse.ArgumentParser(
        description="Converter for 'SpectraFit' from data files to CSV files."
    )
    parser.add_argument(
        "infile",
        type=Path,
        help="Filename of the data file to convert.",
    )
    parser.add_argument(
        "-f",
        "--file-format",
        help="File format for the conversion.",
        type=str,
        choices=choices,
    )
    parser.add_argument(
        "-e",
        "--export-format",
        help="File format for the export.",
        type=str,
        default="csv",
        choices=choices_export,
    )
    return vars(parser.parse_args())

save(data, fname, export_format)

Save the converted data to a CSV file.

Raises:

Type Description
ValueError

If the export format is not supported.

Parameters:

Name Type Description Default
data Any

The converted data, which is a pandas DataFrame.

required
fname Path

The file name of the data file.

required
export_format str

The file format of the exported file.

required
Source code in spectrafit/plugins/data_converter.py
Python
def save(self, data: Any, fname: Path, export_format: str) -> None:
    """Save the converted data to a CSV file.

    Raises:
        ValueError: If the export format is not supported.

    Args:
        data (Any): The converted data, which is a pandas DataFrame.
        fname (Path): The file name of the data file.
        export_format (str): The file format of the exported file.
    """
    if export_format.lower() not in choices_export:
        raise ValueError(f"Export format '{export_format}' is not supported.")
    data.to_csv(fname.with_suffix(f".{export_format}"), index=False)

DataFormats dataclass

Data formats.

Source code in spectrafit/plugins/data_converter.py
Python
@dataclass(frozen=True)
class DataFormats:
    """Data formats."""

    ATHENA = athena_format
    TXT = txt_format

command_line_runner()

Run the converter from the command line.

Source code in spectrafit/plugins/data_converter.py
Python
def command_line_runner() -> None:
    """Run the converter from the command line."""
    DataConverter()()

get_athena_column(fname, comment='#')

Get the header of the file.

Parameters:

Name Type Description Default
fname Path

The file name of the data file.

required
comment str

The comment marker. Defaults to "#".

'#'

Returns:

Type Description
Optional[List[str]]

Optional[List[str]]: The column names of the data file as a list.

Source code in spectrafit/plugins/data_converter.py
Python
def get_athena_column(fname: Path, comment: str = "#") -> Optional[List[str]]:
    """Get the header of the file.

    Args:
        fname (Path): The file name of the data file.
        comment (str, optional): The comment marker. Defaults to "#".

    Returns:
        Optional[List[str]]: The column names of the data file as a list.

    """
    with open(fname, encoding="utf-8") as f:
        data = f.read()
        lines = data.splitlines()
        return next(
            (
                lines[i - 1].split(comment)[-1].split()
                for i, line in enumerate(lines)
                if re.match(r"^\s*\d", line)
            ),
            None,
        )

Pkl Converter for pickle files

Transform the raw pkl data into CSV files.

ExportData

Export the data to a file.

General information

The data is exported to a file. The file format is determined by the file extension of the output file. The supported file formats are:

-[x] npy
-[x] npz
-[x] pkl
-[x] pkl.gz

Classical file formats like CSV, JSON, TOML, etc. are not supported. In the case of CSV, the conversion from unstructured data to a structured format is not trivial. In the case of JSON and TOML, the conversion from NumPy arrays to lists is very costly. Therefore, the data is exported to a pickle file as the preferred format.

About NumPy

Exporting the data to a NumPy file can make loading it somewhat challenging. The data is exported as a dictionary whose values are NumPy arrays. It can be loaded with the following code:

Python
import numpy as np

data = np.load("data.npy", allow_pickle=True).item()
Source code in spectrafit/plugins/pkl_converter.py
Python
class ExportData:
    """Export the data to a file.

    !!! info "General information"

        The data is exported to a file. The file format is determined by the file
        extension of the output file. The supported file formats are:

        -[x] npy
        -[x] npz
        -[x] pkl
        -[x] pkl.gz

        Classical file formats like `CSV`, `JSON`, `TOML`, etc. are not supported.
        In the case of `CSV`, the conversion from unstructured data to a structured
        format is not trivial. In the case of `JSON` and `TOML`, the conversion
        from numpy arrays to lists is very costly. Therefore, the data is
        exported to a pickle file as the preferred format.

    !!! warning "About NumPy"

        Exporting the data to a NumPy file can make loading it somewhat
        challenging. The data is exported as a dictionary whose values are
        numpy arrays. It can be loaded with the following code:

        ```python
        import numpy as np

        data = np.load("data.npy", allow_pickle=True).item()
        ```
    """

    def __init__(self, data: Dict[str, Any], fname: Path, export_format: str) -> None:
        """Export the data to a file.

        Args:
            data (Dict[str, Any]): The data to export.
            fname (Path): The filename of the output file.
            export_format (str): The file format of the output file.
        """
        self.data = data
        self.fname = fname.with_suffix(f".{export_format}")
        self.export_format = export_format

    def __call__(self) -> None:
        """Export the data to a file."""
        if self.export_format in {"npy", "npz"}:
            self.to_numpy()
        elif self.export_format in {"pkl", pkl_gz}:
            self.to_pickle()

    def to_numpy(self) -> None:
        """Export the data to a numpy file."""
        _data: Any = self.data
        if self.export_format.lower() == "npy":
            np.save(self.fname, _data)
        elif self.export_format.lower() == "npz":
            np.savez(self.fname, data=_data)

    def to_pickle(self) -> None:
        """Export the data to a pickle file."""
        if self.export_format.lower() == "pkl":
            with open(self.fname, "wb") as f:
                pickle.dump(self.data, f)
        elif self.export_format.lower() == pkl_gz:
            with gzip.open(self.fname, "wb") as f:
                pickle.dump(self.data, f)

    @staticmethod
    def numpy2list(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Convert the arrays of list dictionaries to a list of dictionaries with list.

        Args:
            data (List[Dict[str, Any]]): The data to convert.

        Returns:
            List[Dict[str, Any]]: The converted data.
        """
        return [
            {k: v.tolist() for k, v in d.items() if isinstance(v, np.ndarray)}
            for d in data
        ]

__call__()

Export the data to a file.

Source code in spectrafit/plugins/pkl_converter.py
Python
def __call__(self) -> None:
    """Export the data to a file."""
    if self.export_format in {"npy", "npz"}:
        self.to_numpy()
    elif self.export_format in {"pkl", pkl_gz}:
        self.to_pickle()

__init__(data, fname, export_format)

Export the data to a file.

Parameters:

Name Type Description Default
data Dict[str, Any]

The data to export.

required
fname Path

The filename of the output file.

required
export_format str

The file format of the output file.

required
Source code in spectrafit/plugins/pkl_converter.py
Python
def __init__(self, data: Dict[str, Any], fname: Path, export_format: str) -> None:
    """Export the data to a file.

    Args:
        data (Dict[str, Any]): The data to export.
        fname (Path): The filename of the output file.
        export_format (str): The file format of the output file.
    """
    self.data = data
    self.fname = fname.with_suffix(f".{export_format}")
    self.export_format = export_format

numpy2list(data) staticmethod

Convert the arrays in a list of dictionaries to lists.

Parameters:

Name Type Description Default
data List[Dict[str, Any]]

The data to convert.

required

Returns:

Type Description
List[Dict[str, Any]]

List[Dict[str, Any]]: The converted data.

Source code in spectrafit/plugins/pkl_converter.py
Python
@staticmethod
def numpy2list(data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Convert the arrays of list dictionaries to a list of dictionaries with list.

    Args:
        data (List[Dict[str, Any]]): The data to convert.

    Returns:
        List[Dict[str, Any]]: The converted data.
    """
    return [
        {k: v.tolist() for k, v in d.items() if isinstance(v, np.ndarray)}
        for d in data
    ]

to_numpy()

Export the data to a numpy file.

Source code in spectrafit/plugins/pkl_converter.py
Python
def to_numpy(self) -> None:
    """Export the data to a numpy file."""
    _data: Any = self.data
    if self.export_format.lower() == "npy":
        np.save(self.fname, _data)
    elif self.export_format.lower() == "npz":
        np.savez(self.fname, data=_data)

to_pickle()

Export the data to a pickle file.

Source code in spectrafit/plugins/pkl_converter.py
Python
def to_pickle(self) -> None:
    """Export the data to a pickle file."""
    if self.export_format.lower() == "pkl":
        with open(self.fname, "wb") as f:
            pickle.dump(self.data, f)
    elif self.export_format.lower() == pkl_gz:
        with gzip.open(self.fname, "wb") as f:
            pickle.dump(self.data, f)

PklConverter

Bases: Converter

Convert pkl data to CSV files.

General information

The pkl data is converted to a CSV file. The CSV file is saved in the same directory as the input file. The name of the CSV file is the same as the input file with the suffix .csv and prefixed with the name of the 'major' keys in the pkl file. Furthermore, a graph of the data is optionally saved as a PDF file to have a visual representation of the data structure.

Supported file formats

Currently supported file formats:

-[x] pkl
-[x] pkl.gz
-[x] ...
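A programmatic sketch of the same steps the command line performs (the file name is hypothetical, 'latin1' is the default encoding mentioned below, and 'npy' is assumed to be among the supported export formats):

Python
from pathlib import Path

from spectrafit.plugins.pkl_converter import PklConverter

converter = PklConverter()
# Flatten the nested pkl data into a dictionary keyed by the 'major' keys ...
data = converter.convert(infile=Path("results.pkl"), file_format="latin1")
# ... and write one file per major key, e.g. 'results_<key>.npy'.
converter.save(data=data, fname=Path("results.pkl"), export_format="npy")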

Source code in spectrafit/plugins/pkl_converter.py
Python
class PklConverter(Converter):
    """Convert pkl data to a CSV files.

    !!! info "General information"

        The pkl data is converted to a CSV file. The CSV file is saved in the same
        directory as the input file. The name of the CSV file is the same as the
        input file with the suffix `.csv` and prefixed with the name of the
        'major' keys in the pkl file. Furthermore, a graph of the data is optionally
        saved as a PDF file to have a visual representation of the data structure.

    !!! info "Supported file formats"

        Currently supported file formats:

        -[x] pkl
        -[x] pkl.gz
        -[x] ...

    """

    def get_args(self) -> Dict[str, Any]:
        """Get the arguments from the command line.

        Returns:
            Dict[str, Any]: Return the input file arguments as a dictionary without
                additional information beyond the command line arguments.
        """
        parser = argparse.ArgumentParser(
            description="Converter for 'SpectraFit' from pkl files to CSV files."
        )
        parser.add_argument(
            "infile",
            type=Path,
            help="Filename of the pkl file to convert.",
        )
        parser.add_argument(
            "-f",
            "--file-format",
            help="File format for the optional encoding of the pickle file."
            " Default is 'latin1'.",
            type=str,
            default="latin1",
            choices=choices_fformat,
        )
        parser.add_argument(
            "-e",
            "--export-format",
            help="File format for export of the output file. Default is 'pkl'.",
            type=str,
            default="pkl",
            choices=choices_export,
        )
        return vars(parser.parse_args())

    @staticmethod
    def convert(infile: Path, file_format: str) -> Dict[str, Any]:
        """Convert the input file to the output file.

        Args:
            infile (Path): The input file as a path object.
            file_format (str): The output file format.

        Returns:
            Dict[str, Any]: The data as a dictionary, which can be a nested dictionary
        """

        def _convert(
            data_values: Dict[str, Any], _key: Optional[List[str]] = None
        ) -> List[Dict[str, Any]]:
            """Convert the data to a list of dictionaries.

            The new key is the old key plus all the subkeys. The new value is the
            value of the subkey if the value is an instance of an array.

            For avoiding `pylint` errors, the `_key` argument is set to `None` by
            default and is set to an empty list if it is `None`. This is done to
            avoid the `pylint` error `dangerous-default-value`. The `_key` argument
            is used to keep track of the keys of the nested dictionary. Furthermore,
            the `_key` argument is used to create the new key for the new dictionary.

            Finally, the new dictionary is appended to the list of dictionaries.

            Args:
                data_values (Dict[str, Any]): The data as a dictionary.

            Returns:
                List[Dict[str, Any]]: The data as a list of dictionaries.
            """
            data_list = []
            if _key is None:
                _key = []
            for key, value in data_values.items():
                if isinstance(value, dict):
                    _key.append(str(key))
                    data_list.extend(_convert(value, _key))
                    _key.pop()
                elif isinstance(value, np.ndarray):
                    data_list.append({"_".join(_key + [key]): value})
            return data_list

        data_dict = {}
        for key, value in pkl2any(infile, file_format).items():
            if isinstance(value, dict):
                data_dict[key] = _convert(value)
        return data_dict

    def save(self, data: Any, fname: Path, export_format: str) -> None:
        """Save the converted pickle data to a file.

        Args:
            data (Any): The converted nested dictionary of the pkl data.
            fname (Path): The filename of the output file.
            export_format (str): The file format of the output file.

        Raises:
            ValueError: If the export format is not supported.
        """
        if export_format.lower() not in choices_export:
            raise ValueError(f"Unsupported file format '{export_format}'.")

        fname = pure_fname(fname)

        for key, value in data.items():
            _fname = Path(f"{fname}_{key}").with_suffix(f".{export_format}")
            ExportData(data=value, fname=_fname, export_format=export_format)()

    def __call__(self) -> None:
        """Run the converter."""
        args = self.get_args()
        data = self.convert(args["infile"], args["file_format"])
        self.save(data, args["infile"], args["export_format"])

__call__()

Run the converter.

Source code in spectrafit/plugins/pkl_converter.py
Python
def __call__(self) -> None:
    """Run the converter."""
    args = self.get_args()
    data = self.convert(args["infile"], args["file_format"])
    self.save(data, args["infile"], args["export_format"])

convert(infile, file_format) staticmethod

Convert the input file to the output file.

Parameters:

Name Type Description Default
infile Path

The input file as a path object.

required
file_format str

The output file format.

required

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: The data as a dictionary, which can be a nested dictionary

Source code in spectrafit/plugins/pkl_converter.py
Python
@staticmethod
def convert(infile: Path, file_format: str) -> Dict[str, Any]:
    """Convert the input file to the output file.

    Args:
        infile (Path): The input file as a path object.
        file_format (str): The output file format.

    Returns:
        Dict[str, Any]: The data as a dictionary, which can be a nested dictionary
    """

    def _convert(
        data_values: Dict[str, Any], _key: Optional[List[str]] = None
    ) -> List[Dict[str, Any]]:
        """Convert the data to a list of dictionaries.

        The new key is the old key plus all the subkeys. The new value is the
        value of the subkey if the value is an instance of an array.

        For avoiding `pylint` errors, the `_key` argument is set to `None` by
        default and is set to an empty list if it is `None`. This is done to
        avoid the `pylint` error `dangerous-default-value`. The `_key` argument
        is used to keep track of the keys of the nested dictionary. Furthermore,
        the `_key` argument is used to create the new key for the new dictionary.

        Finally, the new dictionary is appended to the list of dictionaries.

        Args:
            data_values (Dict[str, Any]): The data as a dictionary.

        Returns:
            List[Dict[str, Any]]: The data as a list of dictionaries.
        """
        data_list = []
        if _key is None:
            _key = []
        for key, value in data_values.items():
            if isinstance(value, dict):
                _key.append(str(key))
                data_list.extend(_convert(value, _key))
                _key.pop()
            elif isinstance(value, np.ndarray):
                data_list.append({"_".join(_key + [key]): value})
        return data_list

    data_dict = {}
    for key, value in pkl2any(infile, file_format).items():
        if isinstance(value, dict):
            data_dict[key] = _convert(value)
    return data_dict

get_args()

Get the arguments from the command line.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Return the input file arguments as a dictionary without additional information beyond the command line arguments.

Source code in spectrafit/plugins/pkl_converter.py
Python
def get_args(self) -> Dict[str, Any]:
    """Get the arguments from the command line.

    Returns:
        Dict[str, Any]: Return the input file arguments as a dictionary without
            additional information beyond the command line arguments.
    """
    parser = argparse.ArgumentParser(
        description="Converter for 'SpectraFit' from pkl files to CSV files."
    )
    parser.add_argument(
        "infile",
        type=Path,
        help="Filename of the pkl file to convert.",
    )
    parser.add_argument(
        "-f",
        "--file-format",
        help="File format for the optional encoding of the pickle file."
        " Default is 'latin1'.",
        type=str,
        default="latin1",
        choices=choices_fformat,
    )
    parser.add_argument(
        "-e",
        "--export-format",
        help="File format for export of the output file. Default is 'pkl'.",
        type=str,
        default="pkl",
        choices=choices_export,
    )
    return vars(parser.parse_args())

save(data, fname, export_format)

Save the converted pickle data to a file.

Parameters:

Name Type Description Default
data Any

The converted nested dictionary of the pkl data.

required
fname Path

The filename of the output file.

required
export_format str

The file format of the output file.

required

Raises:

Type Description
ValueError

If the export format is not supported.

Source code in spectrafit/plugins/pkl_converter.py
Python
def save(self, data: Any, fname: Path, export_format: str) -> None:
    """Save the converted pickle data to a file.

    Args:
        data (Any): The converted nested dictionary of the pkl data.
        fname (Path): The filename of the output file.
        export_format (str): The file format of the output file.

    Raises:
        ValueError: If the export format is not supported.
    """
    if export_format.lower() not in choices_export:
        raise ValueError(f"Unsupported file format '{export_format}'.")

    fname = pure_fname(fname)

    for key, value in data.items():
        _fname = Path(f"{fname}_{key}").with_suffix(f".{export_format}")
        ExportData(data=value, fname=_fname, export_format=export_format)()

command_line_runner()

Run the command line script.

Source code in spectrafit/plugins/pkl_converter.py
Python
def command_line_runner() -> None:
    """Run the command line script."""
    PklConverter()()

About pickle file and the PklVisualizer

In addition to exploring the nested structure of Python's pickle files, the PklVisualizer provides two ways to visualize the data:

  1. As a graph via networkx and matplotlib
  2. As a JSON file listing the contained types
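A minimal sketch of programmatic use (the command line wraps the same two steps; the file name is hypothetical):

Python
from pathlib import Path

from spectrafit.plugins.pkl_visualizer import PklVisualizer

visualizer = PklVisualizer()
# Build the dictionary of contained types and draw the graph with matplotlib ...
data = visualizer.convert(infile=Path("results.pkl"), file_format="latin1")
# ... then save the figure (e.g. as PDF) plus a JSON file of the types.
visualizer.save(data=data, fname=Path("results.pkl"), export_format="pdf")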

Visualize the pkl file as a graph.

PklVisualizer

Bases: Converter

Visualize the pkl data as a graph.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
class PklVisualizer(Converter):
    """Visualize the pkl data as a graph."""

    def get_args(self) -> Dict[str, Any]:
        """Get the arguments from the command line.

        Returns:
            Dict[str, Any]: Return the input file arguments as a dictionary without
                additional information beyond the command line arguments.
        """
        parser = argparse.ArgumentParser(
            description="Converter for 'SpectraFit' from pkl files to a graph."
        )
        parser.add_argument(
            "infile",
            type=Path,
            help="Filename of the pkl file to convert to graph.",
        )
        parser.add_argument(
            "-f",
            "--file-format",
            help="File format for the optional encoding of the pickle file."
            " Default is 'latin1'.",
            type=str,
            default="latin1",
            choices=choices_fformat,
        )
        parser.add_argument(
            "-e",
            "--export-format",
            help="File extension for the graph export.",
            type=str,
            default="pdf",
            choices=choices_export,
        )

        return vars(parser.parse_args())

    @staticmethod
    def convert(infile: Path, file_format: str) -> Dict[str, Any]:
        """Convert the input file to the output file.

        Args:
            infile (Path): The input file as a path object.
            file_format (str): The encoding of the pickle file.

        Raises:
            ValueError: If the data is not a dictionary.

        Returns:
            Dict[str, Any]: The data as a dictionary, which can be a nested dictionary.
        """
        data = PklVisualizer().get_type(pkl2any(infile, encoding=file_format))
        if not isinstance(data, dict):
            raise ValueError(f"Data is not a dictionary: {data}")
        graph = PklVisualizer().create_graph(fname=infile, data_dict=data)

        pos = nx.kamada_kawai_layout(graph, scale=2)
        nx.draw_networkx_nodes(
            graph, pos, node_size=100, node_color="lightblue", alpha=0.8
        )
        nx.draw_networkx_edges(graph, pos, width=0.5, edge_color="grey", alpha=0.5)
        nx.draw_networkx_labels(graph, pos, font_size=10, font_family="sans-serif")
        plt.axis("off")
        return data

    def save(self, data: Any, fname: Path, export_format: str) -> None:
        """Save the graph to a file and the data and their types to a json file.

        Args:
            data (Any): The data to save, which can be a nested dictionary.
            fname (Path): The filename of the file to save.
            export_format (str): The file format to save the graph to.

        Raises:
            ValueError: If the export format is not supported.
        """
        if export_format.lower() not in choices_export:
            raise ValueError(f"Export format '{export_format}' is not supported.")

        plt.savefig(
            pure_fname(fname).with_suffix(f".{export_format}"),
            format=export_format,
        )

        with open(
            pure_fname(fname).with_suffix(".json"), "w+", encoding="utf-8"
        ) as outfile:
            json.dump(data, outfile, indent=4)

    def get_type(self, value: Any) -> Union[Dict[str, Any], str]:
        """Get the type of the value.

        Args:
            value (Any): The value to get the type from.

        Returns:
            Union[Dict[str, Any], str]: The type of the value.
        """
        if isinstance(value, dict):
            return {key: self.get_type(value) for key, value in value.items()}
        if isinstance(value, np.ndarray):
            return f"{type(value)} of shape {value.shape}"
        return str(type(value))

    def add_nodes(self, graph: nx.DiGraph, data_dict: Dict[str, Any]) -> None:
        """Add nodes to the graph.

        Args:
            graph (nx.DiGraph): The graph to add nodes to.
            data_dict (Dict[str, Any]): The data dictionary to get the nodes from.
        """
        for key, value in data_dict.items():
            graph.add_node(key)
            if isinstance(value, dict):
                for item in value:
                    graph.add_edge(key, item)
                    graph.add_node(item)
                self.add_nodes(graph=graph, data_dict=value)
            elif "of shape" in str(value):
                value = value.split("of shape")
                graph.add_node(value[0])
                graph.add_edge(key, value[0])

                graph.add_node(value[-1])
                graph.add_edge(value[0], value[-1])

            else:
                graph.add_node(value)
                graph.add_edge(key, value)

    def create_graph(self, fname: Path, data_dict: Dict[str, Any]) -> nx.DiGraph:
        """Create the graph.

        Args:
            fname (Path): The filename of the file to create the graph from.
            data_dict (Dict[str, Any]): The data dictionary to create the graph from.

        Returns:
            nx.DiGraph: The graph created from the data dictionary.
        """
        graph = nx.DiGraph()
        graph.add_node(str(fname.name))
        for key in data_dict:
            graph.add_edge(str(fname.name), key)

        self.add_nodes(graph=graph, data_dict=data_dict)
        return graph

    def __call__(self) -> None:
        """Create the graph and save it as a PDF file."""
        args = self.get_args()
        self.save(
            data=self.convert(args["infile"], args["file_format"]),
            fname=args["infile"],
            export_format=args["export_format"],
        )
        plt.show()

__call__()

Create the graph and save it as a PDF file.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
def __call__(self) -> None:
    """Create the graph and save it as a PDF file."""
    args = self.get_args()
    self.save(
        data=self.convert(args["infile"], args["file_format"]),
        fname=args["infile"],
        export_format=args["export_format"],
    )
    plt.show()

add_nodes(graph, data_dict)

Add nodes to the graph.

Parameters:

Name Type Description Default
graph nx.DiGraph

The graph to add nodes to.

required
data_dict Dict[str, Any]

The data dictionary to get the nodes from.

required
Source code in spectrafit/plugins/pkl_visualizer.py
Python
def add_nodes(self, graph: nx.DiGraph, data_dict: Dict[str, Any]) -> None:
    """Add nodes to the graph.

    Args:
        graph (nx.DiGraph): The graph to add nodes to.
        data_dict (Dict[str, Any]): The data dictionary to get the nodes from.
    """
    for key, value in data_dict.items():
        graph.add_node(key)
        if isinstance(value, dict):
            for item in value:
                graph.add_edge(key, item)
                graph.add_node(item)
            self.add_nodes(graph=graph, data_dict=value)
        elif "of shape" in str(value):
            value = value.split("of shape")
            graph.add_node(value[0])
            graph.add_edge(key, value[0])

            graph.add_node(value[-1])
            graph.add_edge(value[0], value[-1])

        else:
            graph.add_node(value)
            graph.add_edge(key, value)

convert(infile, file_format) staticmethod

Convert the input file to the output file.

Parameters:

Name Type Description Default
infile Path

The input file as a path object.

required
file_format str

The encoding of the pickle file.

required

Raises:

Type Description
ValueError

If the data is not a dictionary.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: The data as a dictionary, which can be a nested dictionary.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
@staticmethod
def convert(infile: Path, file_format: str) -> Dict[str, Any]:
    """Convert the input file to the output file.

    Args:
        infile (Path): The input file as a path object.
        file_format (str): The encoding of the pickle file.

    Raises:
        ValueError: If the data is not a dictionary.

    Returns:
        Dict[str, Any]: The data as a dictionary, which can be a nested dictionary.
    """
    data = PklVisualizer().get_type(pkl2any(infile, encoding=file_format))
    if not isinstance(data, dict):
        raise ValueError(f"Data is not a dictionary: {data}")
    graph = PklVisualizer().create_graph(fname=infile, data_dict=data)

    pos = nx.kamada_kawai_layout(graph, scale=2)
    nx.draw_networkx_nodes(
        graph, pos, node_size=100, node_color="lightblue", alpha=0.8
    )
    nx.draw_networkx_edges(graph, pos, width=0.5, edge_color="grey", alpha=0.5)
    nx.draw_networkx_labels(graph, pos, font_size=10, font_family="sans-serif")
    plt.axis("off")
    return data

create_graph(fname, data_dict)

Create the graph.

Parameters:

Name Type Description Default
fname Path

The filename of the file to create the graph from.

required
data_dict Dict[str, Any]

The data dictionary to create the graph from.

required

Returns:

Type Description
nx.DiGraph

nx.DiGraph: The graph created from the data dictionary.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
def create_graph(self, fname: Path, data_dict: Dict[str, Any]) -> nx.DiGraph:
    """Create the graph.

    Args:
        fname (Path): The filename of the file to create the graph from.
        data_dict (Dict[str, Any]): The data dictionary to create the graph from.

    Returns:
        nx.DiGraph: The graph created from the data dictionary.
    """
    graph = nx.DiGraph()
    graph.add_node(str(fname.name))
    for key in data_dict:
        graph.add_edge(str(fname.name), key)

    self.add_nodes(graph=graph, data_dict=data_dict)
    return graph

get_args()

Get the arguments from the command line.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Return the input file arguments as a dictionary without additional information beyond the command line arguments.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
def get_args(self) -> Dict[str, Any]:
    """Get the arguments from the command line.

    Returns:
        Dict[str, Any]: Return the input file arguments as a dictionary without
            additional information beyond the command line arguments.
    """
    parser = argparse.ArgumentParser(
        description="Converter for 'SpectraFit' from pkl files to a graph."
    )
    parser.add_argument(
        "infile",
        type=Path,
        help="Filename of the pkl file to convert to graph.",
    )
    parser.add_argument(
        "-f",
        "--file-format",
        help="File format for the optional encoding of the pickle file."
        " Default is 'latin1'.",
        type=str,
        default="latin1",
        choices=choices_fformat,
    )
    parser.add_argument(
        "-e",
        "--export-format",
        help="File extension for the graph export.",
        type=str,
        default="pdf",
        choices=choices_export,
    )

    return vars(parser.parse_args())

get_type(value)

Get the type of the value.

Parameters:

Name Type Description Default
value Any

The value to get the type from.

required

Returns:

Type Description
Union[Dict[str, Any], str]

Union[Dict[str, Any], str]: The type of the value.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
def get_type(self, value: Any) -> Union[Dict[str, Any], str]:
    """Get the type of the value.

    Args:
        value (Any): The value to get the type from.

    Returns:
        Union[Dict[str, Any], str]: The type of the value.
    """
    if isinstance(value, dict):
        return {key: self.get_type(value) for key, value in value.items()}
    if isinstance(value, np.ndarray):
        return f"{type(value)} of shape {value.shape}"
    return str(type(value))

save(data, fname, export_format)

Save the graph to a file and the data and their types to a json file.

Parameters:

Name Type Description Default
data Any

The data to save, which can be a nested dictionary.

required
fname Path

The filename of the file to save.

required
export_format str

The file format to save the graph to.

required

Raises:

Type Description
ValueError

If the export format is not supported.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
def save(self, data: Any, fname: Path, export_format: str) -> None:
    """Save the graph to a file and the data and their types to a json file.

    Args:
        data (Any): The data to save, which can be a nested dictionary.
        fname (Path): The filename of the file to save.
        export_format (str): The file format to save the graph to.

    Raises:
        ValueError: If the export format is not supported.
    """
    if export_format.lower() not in choices_export:
        raise ValueError(f"Export format '{export_format}' is not supported.")

    plt.savefig(
        pure_fname(fname).with_suffix(f".{export_format}"),
        format=export_format,
    )

    with open(
        pure_fname(fname).with_suffix(".json"), "w+", encoding="utf-8"
    ) as outfile:
        json.dump(data, outfile, indent=4)

command_line_runner()

Run the converter from the command line.

Source code in spectrafit/plugins/pkl_visualizer.py
Python
def command_line_runner() -> None:
    """Run the converter from the command line."""
    PklVisualizer()()

RIXS Converter for RIXS data

Transform the raw pkl data into a JSON, TOML, or numpy file for RIXS.

RIXSConverter

Bases: Converter

Transform the raw pkl data into a JSON, TOML, or numpy file for RIXS.
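A rough sketch of a typical call sequence (the file name is hypothetical; 'latin1' and 'json' are the documented defaults for the encoding and the export format):

Python
from pathlib import Path

from spectrafit.plugins.rixs_converter import RIXSConverter

converter = RIXSConverter()
# Read the pkl file into a dictionary ...
data = converter.convert(infile=Path("rixs_scan.pkl"), file_format="latin1")
# ... and export it, e.g. as JSON, next to the input file.
converter.save(data=data, fname=Path("rixs_scan.pkl"), export_format="json")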

Source code in spectrafit/plugins/rixs_converter.py
Python
class RIXSConverter(Converter):
    """Transform the raw pkl data into a JSON, TOML, or numpy file for RIXS."""

    def get_args(self) -> Dict[str, Any]:
        """Get the arguments from the command line.

        Returns:
            Dict[str, Any]: Return the input file arguments as a dictionary without
                additional information beyond the command line arguments.
        """
        parser = argparse.ArgumentParser(
            description="Converter for 'SpectraFit' from pkl files to a JSON, TOML, "
            "or numpy file for RIXS-Visualizer."
        )
        parser.add_argument(
            "infile",
            type=Path,
            help="Filename of the pkl file to convert to JSON, TOML, or numpy.",
        )
        parser.add_argument(
            "-f",
            "--file-format",
            help="File format for the optional encoding of the pickle file."
            " Default is 'latin1'.",
            type=str,
            default="latin1",
            choices=choices_fformat,
        )
        parser.add_argument(
            "-e",
            "--export-format",
            help="File extension for the export.",
            type=str,
            default="json",
            choices=choices_export,
        )
        parser.add_argument(
            "-ie",
            "--incident_energy",
            help="Name of the incident energy",
            type=str,
        )
        parser.add_argument(
            "-ee",
            "--emission_energy",
            help="Name of the emitted energy",
            type=str,
        )
        parser.add_argument(
            "-rm",
            "--rixs_map",
            help="Name of the RIXS map",
        )
        parser.add_argument(
            "-m",
            "--mode",
            help="Mode of the RIXS map post-processing, e.g. 'sum' or 'max'."
            "Default is 'sum'.",
            type=str,
            default="sum",
            choices=choices_mode,
        )
        return vars(parser.parse_args())

    @staticmethod
    def convert(infile: Path, file_format: str) -> MutableMapping[str, Any]:
        """Convert the pkl file to a dictionary.

        Args:
            infile (Path): The input file.
            file_format (str): The file format for the optional encoding of the pickle
                file.

        Returns:
            MutableMapping[str, Any]: The data dictionary from the pkl file.
        """
        data_dict = {}
        for _dict in pkl2any(infile, file_format):
            data_dict.update(_dict)
        return data_dict

    def create_rixs(
        self,
        data: MutableMapping[str, Any],
        incident_energy: str,
        emission_energy: str,
        rixs_map: str,
        mode: str,
    ) -> RIXSModelAPI:
        """Create the RIXS map from the pkl file.

        Args:
            data (MutableMapping[str, Any]): The data dictionary from the pkl file.
            incident_energy (str): The name of the incident energy.
            emission_energy (str): The name of the emitted energy.
            rixs_map (str): The name of the RIXS map.
            mode (str): The mode of the RIXS map post-processing, e.g. 'sum' or 'mean'.

        Raises:
            ValueError: If the mode is not in the choices.
            KeyError: If the incident energy, emission energy, or RIXS map is not in
                the data.

        Returns:
            RIXSModelAPI: The RIXS map as a RIXSModelAPI pydantic object.
        """
        if mode not in choices_mode:
            raise ValueError(f"Mode '{mode}' not in {choices_mode}.")
        if incident_energy not in data:
            self.raise_error(incident_energy, data)
        if emission_energy not in data:
            self.raise_error(emission_energy, data)
        if rixs_map not in data:
            self.raise_error(rixs_map, data)

        if mode == "sum":
            rixs_val = np.sum(data[rixs_map], axis=0)
        elif mode == "mean":
            rixs_val = np.mean(data[rixs_map], axis=0)
        return RIXSModelAPI(
            incident_energy=data[incident_energy],
            emission_energy=data[emission_energy],
            rixs_map=rixs_val,
        )

    @staticmethod
    def raise_error(wrong_key: str, data: Any) -> None:
        """Raise an error if the key is not in the data.

        Args:
            wrong_key (str): The key which is not in the data.
            data (Any): The data dictionary from the pkl file.

        Raises:
            KeyError: If the key is not in the data.

        """
        raise KeyError(
            f"Key '{wrong_key}' not in data. Aailable keys are: {list(data.keys())}."
        )

    def save(self, data: Any, fname: Path, export_format: str) -> None:
        """Save the data to a file.

        Args:
            data (Any): The data to save.
            fname (Path): The filename.
            export_format (str): The file extension for the export.

        Raises:
            ValueError: If the export format is not in the choices.
        """
        if export_format not in choices_export:
            raise ValueError(
                f"Export format '{export_format}' not in {choices_export}."
            )

        if export_format == "json":
            with open(
                pure_fname(fname).with_suffix(f".{export_format}"),
                "w",
                encoding="utf-8",
            ) as f:
                json.dump(self.numpydict2listdict(data), f, indent=4)
        elif export_format in {"toml", "lock"}:
            with open(
                pure_fname(fname).with_suffix(f".{export_format}"),
                "wb",
            ) as f:
                tomli_w.dump(self.numpydict2listdict(data), f, multiline_strings=False)
        elif export_format == "npy":
            np.save(pure_fname(fname).with_suffix(f".{export_format}"), data)
        elif export_format == "npz":
            np.savez(pure_fname(fname).with_suffix(f".{export_format}"), **data)

    @staticmethod
    def numpydict2listdict(data: MutableMapping[str, Any]) -> MutableMapping[str, Any]:
        """Convert a dictionary with numpy arrays to a dictionary with lists.

        Args:
            data (MutableMapping[str, Any]): The data dictionary.

        Returns:
            MutableMapping[str, Any]: The data dictionary with lists.
        """
        return {k: v.tolist() for k, v in data.items()}

    def __call__(self) -> None:
        """Run the converter."""
        args = self.get_args()
        self.save(
            data=self.create_rixs(
                data=self.convert(args["infile"], args["file_format"]),
                incident_energy=args["incident_energy"],
                emission_energy=args["emission_energy"],
                rixs_map=args["rixs_map"],
                mode=args["mode"],
            ).dict(),
            fname=args["infile"],
            export_format=args["export_format"],
        )

__call__()

Run the converter.

Source code in spectrafit/plugins/rixs_converter.py
Python
def __call__(self) -> None:
    """Run the converter."""
    args = self.get_args()
    self.save(
        data=self.create_rixs(
            data=self.convert(args["infile"], args["file_format"]),
            incident_energy=args["incident_energy"],
            emission_energy=args["emission_energy"],
            rixs_map=args["rixs_map"],
            mode=args["mode"],
        ).dict(),
        fname=args["infile"],
        export_format=args["export_format"],
    )

convert(infile, file_format) staticmethod

Convert the pkl file to a dictionary.

Parameters:

Name Type Description Default
infile Path

The input file.

required
file_format str

The file format for the optional encoding of the pickle file.

required

Returns:

Type Description
MutableMapping[str, Any]

MutableMapping[str, Any]: The data dictionary from the pkl file.

Source code in spectrafit/plugins/rixs_converter.py
Python
@staticmethod
def convert(infile: Path, file_format: str) -> MutableMapping[str, Any]:
    """Convert the pkl file to a dictionary.

    Args:
        infile (Path): The input file.
        file_format (str): The file format for the optional encoding of the pickle
            file.

    Returns:
        MutableMapping[str, Any]: The data dictionary from the pkl file.
    """
    data_dict = {}
    for _dict in pkl2any(infile, file_format):
        data_dict.update(_dict)
    return data_dict

create_rixs(data, incident_energy, emission_energy, rixs_map, mode)

Create the RIXS map from the pkl file.

Parameters:

Name Type Description Default
data MutableMapping[str, Any]

The data dictionary from the pkl file.

required
incident_energy str

The name of the incident energy.

required
emission_energy str

The name of the emitted energy.

required
rixs_map str

The name of the RIXS map.

required
mode str

The mode of the RIXS map post-processing, e.g. 'sum' or 'mean'.

required

Raises:

Type Description
ValueError

If the mode is not in the choices.

KeyError

If the incident energy, emission energy, or RIXS map is not in the data.

Returns:

Name Type Description
RIXSModelAPI RIXSModelAPI

The RIXS map as a RIXSModelAPI pydantic object.

Source code in spectrafit/plugins/rixs_converter.py
Python
def create_rixs(
    self,
    data: MutableMapping[str, Any],
    incident_energy: str,
    emission_energy: str,
    rixs_map: str,
    mode: str,
) -> RIXSModelAPI:
    """Create the RIXS map from the pkl file.

    Args:
        data (MutableMapping[str, Any]): The data dictionary from the pkl file.
        incident_energy (str): The name of the incident energy.
        emission_energy (str): The name of the emitted energy.
        rixs_map (str): The name of the RIXS map.
        mode (str): The mode of the RIXS map post-processing, e.g. 'sum' or 'mean'.

    Raises:
        ValueError: If the mode is not in the choices.
        KeyError: If the incident energy, emission energy, or RIXS map is not in
            the data.

    Returns:
        RIXSModelAPI: The RIXS map as a RIXSModelAPI pydantic object.
    """
    if mode not in choices_mode:
        raise ValueError(f"Mode '{mode}' not in {choices_mode}.")
    if incident_energy not in data:
        self.raise_error(incident_energy, data)
    if emission_energy not in data:
        self.raise_error(emission_energy, data)
    if rixs_map not in data:
        self.raise_error(rixs_map, data)

    if mode == "sum":
        rixs_val = np.sum(data[rixs_map], axis=0)
    elif mode == "mean":
        rixs_val = np.mean(data[rixs_map], axis=0)
    return RIXSModelAPI(
        incident_energy=data[incident_energy],
        emission_energy=data[emission_energy],
        rixs_map=rixs_val,
    )
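
As a quick illustration of the two post-processing modes handled above, the sketch below shows what np.sum(..., axis=0) and np.mean(..., axis=0) do to a stacked RIXS map; the array values are made up.

Python
import numpy as np

stacked_map = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.sum(stacked_map, axis=0))   # [4. 6.] -> mode="sum"
print(np.mean(stacked_map, axis=0))  # [2. 3.] -> mode="mean"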

get_args()

Get the arguments from the command line.

Returns:

Type Description
Dict[str, Any]

Dict[str, Any]: Return the input file arguments as a dictionary without additional information beyond the command line arguments.

Source code in spectrafit/plugins/rixs_converter.py
Python
def get_args(self) -> Dict[str, Any]:
    """Get the arguments from the command line.

    Returns:
        Dict[str, Any]: Return the input file arguments as a dictionary without
            additional information beyond the command line arguments.
    """
    parser = argparse.ArgumentParser(
        description="Converter for 'SpectraFit' from pkl files to a JSON, TOML, "
        "or numpy file for RIXS-Visualizer."
    )
    parser.add_argument(
        "infile",
        type=Path,
        help="Filename of the pkl file to convert to JSON, TOML, or numpy.",
    )
    parser.add_argument(
        "-f",
        "--file-format",
        help="File format for the optional encoding of the pickle file."
        " Default is 'latin1'.",
        type=str,
        default="latin1",
        choices=choices_fformat,
    )
    parser.add_argument(
        "-e",
        "--export-format",
        help="File extension for the export.",
        type=str,
        default="json",
        choices=choices_export,
    )
    parser.add_argument(
        "-ie",
        "--incident_energy",
        help="Name of the incident energy",
        type=str,
    )
    parser.add_argument(
        "-ee",
        "--emission_energy",
        help="Name of the emitted energy",
        type=str,
    )
    parser.add_argument(
        "-rm",
        "--rixs_map",
        help="Name of the RIXS map",
    )
    parser.add_argument(
        "-m",
        "--mode",
        help="Mode of the RIXS map post-processing, e.g. 'sum' or 'max'."
        "Default is 'sum'.",
        type=str,
        default="sum",
        choices=choices_mode,
    )
    return vars(parser.parse_args())
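
A minimal sketch of the resulting argument dictionary, obtained here by patching sys.argv instead of a real shell call; the file name and the key names passed via -ie, -ee, and -rm are assumptions about the content of the pickle file.

Python
import sys

from spectrafit.plugins.rixs_converter import RIXSConverter

sys.argv = [
    "rixs-converter",
    "scan.pkl",  # hypothetical input file
    "-ie", "incident_energy",
    "-ee", "emission_energy",
    "-rm", "rixs_map",
]
args = RIXSConverter().get_args()
# -> {"infile": Path("scan.pkl"), "file_format": "latin1", "export_format": "json",
#     "incident_energy": "incident_energy", "emission_energy": "emission_energy",
#     "rixs_map": "rixs_map", "mode": "sum"}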

numpydict2listdict(data) staticmethod

Convert a dictionary with numpy arrays to a dictionary with lists.

Parameters:

Name Type Description Default
data MutableMapping[str, Any]

The data dictionary.

required

Returns:

Type Description
MutableMapping[str, Any]

MutableMapping[str, Any]: The data dictionary with lists.

Source code in spectrafit/plugins/rixs_converter.py
Python
@staticmethod
def numpydict2listdict(data: MutableMapping[str, Any]) -> MutableMapping[str, Any]:
    """Convert a dictionary with numpy arrays to a dictionary with lists.

    Args:
        data (MutableMapping[str, Any]): The data dictionary.

    Returns:
        MutableMapping[str, Any]: The data dictionary with lists.
    """
    return {k: v.tolist() for k, v in data.items()}
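
Since JSON and TOML cannot serialize numpy arrays directly, the helper converts every value to a plain list before dumping. A minimal sketch with made-up values:

Python
import numpy as np

from spectrafit.plugins.rixs_converter import RIXSConverter

data = {
    "incident_energy": np.array([640.0, 641.0]),
    "rixs_map": np.array([[0.1, 0.2], [0.3, 0.4]]),
}
print(RIXSConverter.numpydict2listdict(data))
# {'incident_energy': [640.0, 641.0], 'rixs_map': [[0.1, 0.2], [0.3, 0.4]]}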

raise_error(wrong_key, data) staticmethod

Raise an error if the key is not in the data.

Parameters:

Name Type Description Default
wrong_key str

The key which is not in the data.

required
data Any

The data dictionary from the pkl file.

required

Raises:

Type Description
KeyError

If the key is not in the data.

Source code in spectrafit/plugins/rixs_converter.py
Python
@staticmethod
def raise_error(wrong_key: str, data: Any) -> None:
    """Raise an error if the key is not in the data.

    Args:
        wrong_key (str): The key which is not in the data.
        data (Any): The data dictionary from the pkl file.

    Raises:
        KeyError: If the key is not in the data.

    """
    raise KeyError(
        f"Key '{wrong_key}' not in data. Aailable keys are: {list(data.keys())}."
    )

save(data, fname, export_format)

Save the data to a file.

Parameters:

Name Type Description Default
data Any

The data to save.

required
fname Path

The filename.

required
export_format str

The file extension for the export.

required

Raises:

Type Description
ValueError

If the export format is not in the choices.

Source code in spectrafit/plugins/rixs_converter.py
Python
def save(self, data: Any, fname: Path, export_format: str) -> None:
    """Save the data to a file.

    Args:
        data (Any): The data to save.
        fname (Path): The filename.
        export_format (str): The file extension for the export.

    Raises:
        ValueError: If the export format is not in the choices.
    """
    if export_format not in choices_export:
        raise ValueError(
            f"Export format '{export_format}' not in {choices_export}."
        )

    if export_format == "json":
        with open(
            pure_fname(fname).with_suffix(f".{export_format}"),
            "w",
            encoding="utf-8",
        ) as f:
            json.dump(self.numpydict2listdict(data), f, indent=4)
    elif export_format in {"toml", "lock"}:
        with open(
            pure_fname(fname).with_suffix(f".{export_format}"),
            "wb",
        ) as f:
            tomli_w.dump(self.numpydict2listdict(data), f, multiline_strings=False)
    elif export_format == "npy":
        np.save(pure_fname(fname).with_suffix(f".{export_format}"), data)
    elif export_format == "npz":
        np.savez(pure_fname(fname).with_suffix(f".{export_format}"), **data)

command_line_runner()

Run the command line script.

Source code in spectrafit/plugins/rixs_converter.py
Python
def command_line_runner() -> None:
    """Run the command line script."""
    RIXSConverter()()
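
The converter can also be used programmatically instead of via the command line. The following sketch chains convert, create_rixs, and save in the same way as __call__ does; the file name and the key names inside the pickle file are assumptions.

Python
from pathlib import Path

from spectrafit.plugins.rixs_converter import RIXSConverter

converter = RIXSConverter()
data = converter.convert(Path("scan.pkl"), file_format="latin1")  # hypothetical file
rixs = converter.create_rixs(
    data=data,
    incident_energy="incident_energy",  # assumed key names in the pkl file
    emission_energy="emission_energy",
    rixs_map="rixs_map",
    mode="sum",
)
converter.save(data=rixs.dict(), fname=Path("scan.pkl"), export_format="json")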

On top of the RIXSConverter class, the RIXSVisualizer class is available to visualize the RIXS data and to take RIXS cuts.

This module contains the RIXS visualizer class.

RIXSApp

Bases: RIXSFigure

Create the RIXS app.

About the RIXS app

The RIXS app is a web application that allows you to visualize the RIXS data. The app is based on the Dash framework. The app is composed of three figures: the RIXS figure, the XES figure and the XAS figure.

The RIXS figure is a 3D surface plot. The XES figure is a line plot showing the XES spectrum. The XAS figure is a line plot showing the XAS spectrum.

The RIXS figure is interactive. You can zoom in and out, rotate the figure, and change the color scale. The XES and XAS figures are not interactive.
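
A minimal sketch for starting the app from Python with synthetic arrays; real data would typically come from the RIXSConverter output. The array sizes and values are illustrative only.

Python
import numpy as np

from spectrafit.plugins.rixs_visualizer import RIXSApp

incident = np.linspace(640.0, 660.0, 50)
emission = np.linspace(630.0, 650.0, 60)
rixs_map = np.random.default_rng(0).random((incident.size, emission.size))

app = RIXSApp(incident_energy=incident, emission_energy=emission, rixs_map=rixs_map)
app.app_run()  # serves the Dash app on the default port 8050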

Source code in spectrafit/plugins/rixs_visualizer.py
Python
class RIXSApp(RIXSFigure):  # pragma: no cover
    """Create the RIXS app.

    !!! info "About the RIXS app"

        The RIXS app is a web application that allows you to visualize the RIXS
        data. The app is based on the Dash framework. The app is composed of
        three figures: the RIXS figure, the XES figure and the XAS figure.

        The RIXS figure is a 3D surface plot. The XES figure is a line plot
        showing the XES spectrum. The XAS figure is a line plot showing the XAS
        spectrum.

        The RIXS figure is interactive. You can zoom in and out, rotate the
        figure, and change the color scale. The XES and XAS figures are not
        interactive.

    """

    def __init__(
        self,
        incident_energy: NDArray[np.float64],
        emission_energy: NDArray[np.float64],
        rixs_map: NDArray[np.float64],
        size: SizeRatioAPI = SizeRatioAPI(
            size=(500, 500),
            ratio_rixs=(2, 2),
            ratio_xas=(3, 1),
            ratio_xes=(3, 1),
        ),
        main_title: MainTitleAPI = MainTitleAPI(rixs="RIXS", xes="XES", xas="XAS"),
        fdir: Path = Path("./"),
        mode: str = "server",
        jupyter_dash: bool = False,
        port: int = 8050,
        debug: bool = False,
    ) -> None:
        """Create the RIXS app.

        Args:
            incident_energy (NDArray[np.float64]): Incident energy.
            emission_energy (NDArray[np.float64]): Emission energy.
            rixs_map (NDArray[np.float64]): RIXS data as a 2D array.
            size (SizeRatioAPI, optional): Size of the figures. Defaults to
                 SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xas=(3, 1),
                 ratio_xes=(3, 1)).
            main_title (MainTitleAPI, optional): Main title of the figures.
                 Defaults to MainTitleAPI(rixs="RIXS", xes="XES", xas="XAS").
            fdir (Path, optional): Directory to save the figures. Defaults to
                 Path("./").
            mode (str, optional): Mode of the app. Defaults to "server".
            port (int, optional): Port of the app. Defaults to 8050.
            jupyter_dash (bool, optional): Jupyter Dash mode. Defaults to False.
            debug (bool, optional): Debug mode. Defaults to False.

        """
        super().__init__(
            incident_energy=incident_energy,
            emission_energy=emission_energy,
            rixs_map=rixs_map,
            size=size,
        )
        self.fdir = fdir
        self.main_title = main_title
        self.mode = mode
        self.jupyter_dash = jupyter_dash
        self.port = port
        self.debug = debug
        if not self.debug:
            self.logging_flask()

    def logging_flask(self) -> None:
        """Set the logging level of the Flask server to ERROR."""
        log = logging.getLogger("werkzeug")
        log.setLevel(logging.ERROR)

    def colorscale(self) -> html.Div:
        """Create the color scale dropdown.

        Returns:
            html.Div: Color scale dropdown.
        """
        return html.Div(
            [
                dbc.Label("Color Scale"),
                dcc.Dropdown(
                    id="colorscale",
                    options=[
                        {"label": "Viridis", "value": "Viridis"},
                        {"label": "Plasma", "value": "Plasma"},
                        {"label": "Inferno", "value": "Inferno"},
                        {"label": "Magma", "value": "Magma"},
                        {"label": "Cividis", "value": "Cividis"},
                        {"label": "Greys", "value": "Greys"},
                        {"label": "Greens", "value": "Greens"},
                        {"label": "YlOrRd", "value": "YlOrRd"},
                        {"label": "Bluered", "value": "Bluered"},
                        {"label": "RdBu", "value": "RdBu"},
                        {"label": "Reds", "value": "Reds"},
                        {"label": "Blues", "value": "Blues"},
                        {"label": "Picnic", "value": "Picnic"},
                        {"label": "Rainbow", "value": "Rainbow"},
                        {"label": "Portland", "value": "Portland"},
                        {"label": "Jet", "value": "Jet"},
                        {"label": "Hot", "value": "Hot"},
                        {"label": "Blackbody", "value": "Blackbody"},
                        {"label": "Earth", "value": "Earth"},
                        {"label": "Electric", "value": "Electric"},
                        {"label": "Viridis", "value": "Viridis"},
                        {"label": "Cividis", "value": "Cividis"},
                    ],
                    value="Viridis",
                ),
            ],
            className="dbc",
        )

    def opacity(self) -> html.Div:
        """Create the opacity slider.

        Returns:
            html.Div: Opacity slider.
        """
        return html.Div(
            [
                dbc.Label("Opacity"),
                dcc.Slider(
                    id="opacity",
                    min=0,
                    max=1,
                    step=0.1,
                    value=1,
                    marks={i: str(i) for i in range(2)},
                ),
            ]
        )

    def header(self) -> dbc.Card:
        """Create the header.

        Returns:
            dbc.Card: Header as a bootstrap card.
        """
        return dbc.Card(
            dbc.CardBody(
                [
                    html.H4(
                        "RIXS Visualizer App",
                        className="bg-primary text-white p-2 mb-2 text-center",
                    )
                ]
            )
        )

    def pre_body(self) -> Tuple[html.Div, html.Div, html.Div]:
        """Create the body.

        Returns:
            Tuple[html.Div, html.Div, html.Div]: Body as a tuple of three plot parts.
        """
        rixs = html.Div(
            [
                dbc.Label(self.main_title.rixs),
                dcc.Graph(id="rixs-figure"),
            ]
        )
        xes = html.Div(
            [
                dbc.Label(self.main_title.xes),
                dcc.Graph(id="xes-figure"),
            ]
        )
        xas = html.Div(
            [
                dbc.Label(self.main_title.xas),
                dcc.Graph(id="xas-figure"),
            ]
        )
        return rixs, xes, xas

    def body(self) -> dbc.Card:
        """Create the body.

        Returns:
            dbc.Card: Body as a bootstrap card.
        """
        colorscale = self.colorscale()
        opacity = self.opacity()
        rixs, xes, xas = self.pre_body()

        return (
            dbc.Card(
                dbc.CardBody(
                    [
                        dbc.Row([ThemeChangerAIO(aio_id="theme")]),
                        dbc.Row(
                            [
                                dbc.Col(
                                    html.H1("RIXS Viewer", className="text-center")
                                ),
                            ],
                            justify="left",
                        ),
                        html.Br(),
                        dbc.Row(
                            [
                                dbc.Col(colorscale),
                                dbc.Col(opacity),
                            ],
                        ),
                        html.Br(),
                        dbc.Row(
                            [
                                dbc.Col(rixs),
                                dbc.Col([xes, xas]),
                            ],
                            justify="left",
                        ),
                        html.Br(),
                    ],
                ),
                class_name="mt-4",
            ),
        )[0]

    def footer(self) -> dbc.Card:
        """Create the footer.

        Returns:
            dbc.Card: Footer as a bootstrap card.
        """
        return (
            dbc.Card(
                dbc.CardBody(
                    [
                        dbc.Row(
                            [
                                dcc.Markdown(
                                    """
                    ### RIXS Viewer
                    This is a simple RIXS viewer. It is based on the
                    [Dash](https://dash.plotly.com/)
                    framework and uses the [Plotly](https://plotly.com/python/) library
                    for plotting. The code is available on
                    [GitHub](https://github.com/anselmoo/spectrafit).
                    """
                                ),
                            ],
                            justify="left",
                        )
                    ]
                ),
                class_name="mt-4",
            ),
        )[0]

    def app_run(self) -> None:
        """Run the app."""
        dbc_css = (
            "https://cdn.jsdelivr.net/gh/AnnMarieW/dash-bootstrap-templates/dbc.min.css"
        )
        external_stylesheets = [dbc.themes.COSMO, dbc_css]
        if self.jupyter_dash:
            app = JupyterDash(__name__, external_stylesheets=external_stylesheets)
        else:
            app = dash.Dash(
                __name__,
                external_stylesheets=external_stylesheets,
                meta_tags=[
                    {
                        "name": "viewport",
                        "content": "width=device-width, initial-scale=1",
                    }
                ],
            )
        app.layout = dbc.Container(
            [
                self.header(),
                self.body(),
                self.footer(),
            ],
            fluid=True,
        )

        @app.callback(
            [
                dash.dependencies.Output("xes-figure", "figure"),
                dash.dependencies.Output("xas-figure", "figure"),
                dash.dependencies.Output("rixs-figure", "figure"),
            ],
            [
                dash.dependencies.Input("rixs-figure", "hoverData"),
                dash.dependencies.Input("rixs-figure", "clickData"),
                dash.dependencies.Input("colorscale", "value"),
                dash.dependencies.Input("opacity", "value"),
                dash.dependencies.Input(ThemeChangerAIO.ids.radio("theme"), "value"),
            ],
        )
        def update_hover_data(
            hoverData: Dict[str, List[Dict[str, float]]],
            clickData: Dict[str, List[Dict[str, float]]],
            colorscale: str,
            opacity: float,
            theme: str,
        ) -> Tuple[go.Figure, go.Figure, go.Figure]:
            if hoverData is None:
                return (
                    self.create_xas(
                        x=self.incident_energy,
                        y=self.rixs_map[:, int(self.emission_energy.size / 2)],
                        template=template_from_url(theme),
                    ),
                    self.create_xes(
                        x=self.emission_energy,
                        y=self.rixs_map[int(self.incident_energy.size / 2), :],
                        template=template_from_url(theme),
                    ),
                    self.create_rixs(
                        colorscale=colorscale,
                        opacity=opacity,
                        template=template_from_url(theme),
                    ),
                )
            x = hoverData["points"][0]["x"]
            y = hoverData["points"][0]["y"]
            xes_fig = self.create_xas(
                x=self.incident_energy,
                y=self.rixs_map[:, int(x)],
                template=template_from_url(theme),
            )
            xas_fig = self.create_xes(
                x=self.emission_energy,
                y=self.rixs_map[int(y), :],
                template=template_from_url(theme),
            )
            rixs_fig = self.create_rixs(
                colorscale=colorscale,
                opacity=opacity,
                template=template_from_url(theme),
            )
            if clickData is None:
                return xes_fig, xas_fig, rixs_fig
            cx = clickData["points"][0]["x"]
            cy = clickData["points"][0]["y"]
            pd.DataFrame(
                {"energy": self.emission_energy, "intensity": self.rixs_map[int(cy), :]}
            ).to_csv(
                self.fdir / f"xes_cut_{np.round(cx, 8)}.txt",
                index=False,
            )
            pd.DataFrame(
                {"energy": self.incident_energy, "intensity": self.rixs_map[:, int(cx)]}
            ).to_csv(
                self.fdir / f"xas_cut_{np.round(cy, 8)}.txt",
                index=False,
            )
            return xes_fig, xas_fig, rixs_fig

        if self.jupyter_dash:
            app.run_server(mode=self.mode, debug=self.debug, port=self.port)
        else:
            app.run_server(debug=self.debug, port=self.port)

__init__(incident_energy, emission_energy, rixs_map, size=SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xas=(3, 1), ratio_xes=(3, 1)), main_title=MainTitleAPI(rixs='RIXS', xes='XES', xas='XAS'), fdir=Path('./'), mode='server', jupyter_dash=False, port=8050, debug=False)

Create the RIXS app.

Parameters:

Name Type Description Default
incident_energy NDArray[np.float64]

Incident energy.

required
emission_energy NDArray[np.float64]

Emission energy.

required
rixs_map NDArray[np.float64]

RIXS data as a 2D array.

required
size SizeRatioAPI

Size of the figures. Defaults to SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xas=(3, 1), ratio_xes=(3, 1)).

SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xas=(3, 1), ratio_xes=(3, 1))
main_title MainTitleAPI

Main title of the figures. Defaults to MainTitleAPI(rixs="RIXS", xes="XES", xas="XAS").

MainTitleAPI(rixs='RIXS', xes='XES', xas='XAS')
fdir Path

Directory to save the figures. Defaults to Path("./").

Path('./')
mode str

Mode of the app. Defaults to "server".

'server'
port int

Port of the app. Defaults to 8050.

8050
jupyter_dash bool

Jupyter Dash mode. Defaults to False.

False
debug bool

Debug mode. Defaults to False.

False
Source code in spectrafit/plugins/rixs_visualizer.py
Python
def __init__(
    self,
    incident_energy: NDArray[np.float64],
    emission_energy: NDArray[np.float64],
    rixs_map: NDArray[np.float64],
    size: SizeRatioAPI = SizeRatioAPI(
        size=(500, 500),
        ratio_rixs=(2, 2),
        ratio_xas=(3, 1),
        ratio_xes=(3, 1),
    ),
    main_title: MainTitleAPI = MainTitleAPI(rixs="RIXS", xes="XES", xas="XAS"),
    fdir: Path = Path("./"),
    mode: str = "server",
    jupyter_dash: bool = False,
    port: int = 8050,
    debug: bool = False,
) -> None:
    """Create the RIXS app.

    Args:
        incident_energy (NDArray[np.float64]): Incident energy.
        emission_energy (NDArray[np.float64]): Emission energy.
        rixs_map (NDArray[np.float64]): RIXS data as a 2D array.
        size (SizeRatioAPI, optional): Size of the figures. Defaults to
             SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xas=(3, 1),
             ratio_xes=(3, 1)).
        main_title (MainTitleAPI, optional): Main title of the figures.
             Defaults to MainTitleAPI(rixs="RIXS", xes="XES", xas="XAS").
        fdir (Path, optional): Directory to save the figures. Defaults to
             Path("./").
        mode (str, optional): Mode of the app. Defaults to "server".
        port (int, optional): Port of the app. Defaults to 8050.
        jupyter_dash (bool, optional): Jupyter Dash mode. Defaults to False.
        debug (bool, optional): Debug mode. Defaults to False.

    """
    super().__init__(
        incident_energy=incident_energy,
        emission_energy=emission_energy,
        rixs_map=rixs_map,
        size=size,
    )
    self.fdir = fdir
    self.main_title = main_title
    self.mode = mode
    self.jupyter_dash = jupyter_dash
    self.port = port
    self.debug = debug
    if not self.debug:
        self.logging_flask()

app_run()

Run the app.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def app_run(self) -> None:
    """Run the app."""
    dbc_css = (
        "https://cdn.jsdelivr.net/gh/AnnMarieW/dash-bootstrap-templates/dbc.min.css"
    )
    external_stylesheets = [dbc.themes.COSMO, dbc_css]
    if self.jupyter_dash:
        app = JupyterDash(__name__, external_stylesheets=external_stylesheets)
    else:
        app = dash.Dash(
            __name__,
            external_stylesheets=external_stylesheets,
            meta_tags=[
                {
                    "name": "viewport",
                    "content": "width=device-width, initial-scale=1",
                }
            ],
        )
    app.layout = dbc.Container(
        [
            self.header(),
            self.body(),
            self.footer(),
        ],
        fluid=True,
    )

    @app.callback(
        [
            dash.dependencies.Output("xes-figure", "figure"),
            dash.dependencies.Output("xas-figure", "figure"),
            dash.dependencies.Output("rixs-figure", "figure"),
        ],
        [
            dash.dependencies.Input("rixs-figure", "hoverData"),
            dash.dependencies.Input("rixs-figure", "clickData"),
            dash.dependencies.Input("colorscale", "value"),
            dash.dependencies.Input("opacity", "value"),
            dash.dependencies.Input(ThemeChangerAIO.ids.radio("theme"), "value"),
        ],
    )
    def update_hover_data(
        hoverData: Dict[str, List[Dict[str, float]]],
        clickData: Dict[str, List[Dict[str, float]]],
        colorscale: str,
        opacity: float,
        theme: str,
    ) -> Tuple[go.Figure, go.Figure, go.Figure]:
        if hoverData is None:
            return (
                self.create_xas(
                    x=self.incident_energy,
                    y=self.rixs_map[:, int(self.emission_energy.size / 2)],
                    template=template_from_url(theme),
                ),
                self.create_xes(
                    x=self.emission_energy,
                    y=self.rixs_map[int(self.incident_energy.size / 2), :],
                    template=template_from_url(theme),
                ),
                self.create_rixs(
                    colorscale=colorscale,
                    opacity=opacity,
                    template=template_from_url(theme),
                ),
            )
        x = hoverData["points"][0]["x"]
        y = hoverData["points"][0]["y"]
        xes_fig = self.create_xas(
            x=self.incident_energy,
            y=self.rixs_map[:, int(x)],
            template=template_from_url(theme),
        )
        xas_fig = self.create_xes(
            x=self.emission_energy,
            y=self.rixs_map[int(y), :],
            template=template_from_url(theme),
        )
        rixs_fig = self.create_rixs(
            colorscale=colorscale,
            opacity=opacity,
            template=template_from_url(theme),
        )
        if clickData is None:
            return xes_fig, xas_fig, rixs_fig
        cx = clickData["points"][0]["x"]
        cy = clickData["points"][0]["y"]
        pd.DataFrame(
            {"energy": self.emission_energy, "intensity": self.rixs_map[int(cy), :]}
        ).to_csv(
            self.fdir / f"xes_cut_{np.round(cx, 8)}.txt",
            index=False,
        )
        pd.DataFrame(
            {"energy": self.incident_energy, "intensity": self.rixs_map[:, int(cx)]}
        ).to_csv(
            self.fdir / f"xas_cut_{np.round(cy, 8)}.txt",
            index=False,
        )
        return xes_fig, xas_fig, rixs_fig

    if self.jupyter_dash:
        app.run_server(mode=self.mode, debug=self.debug, port=self.port)
    else:
        app.run_server(debug=self.debug, port=self.port)

body()

Create the body.

Returns:

Type Description
dbc.Card

dbc.Card: Body as a bootstrap card.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def body(self) -> dbc.Card:
    """Create the body.

    Returns:
        dbc.Card: Body as a bootstrap card.
    """
    colorscale = self.colorscale()
    opacity = self.opacity()
    rixs, xes, xas = self.pre_body()

    return (
        dbc.Card(
            dbc.CardBody(
                [
                    dbc.Row([ThemeChangerAIO(aio_id="theme")]),
                    dbc.Row(
                        [
                            dbc.Col(
                                html.H1("RIXS Viewer", className="text-center")
                            ),
                        ],
                        justify="left",
                    ),
                    html.Br(),
                    dbc.Row(
                        [
                            dbc.Col(colorscale),
                            dbc.Col(opacity),
                        ],
                    ),
                    html.Br(),
                    dbc.Row(
                        [
                            dbc.Col(rixs),
                            dbc.Col([xes, xas]),
                        ],
                        justify="left",
                    ),
                    html.Br(),
                ],
            ),
            class_name="mt-4",
        ),
    )[0]

colorscale()

Create the color scale dropdown.

Returns:

Type Description
html.Div

html.Div: Color scale dropdown.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def colorscale(self) -> html.Div:
    """Create the color scale dropdown.

    Returns:
        html.Div: Color scale dropdown.
    """
    return html.Div(
        [
            dbc.Label("Color Scale"),
            dcc.Dropdown(
                id="colorscale",
                options=[
                    {"label": "Viridis", "value": "Viridis"},
                    {"label": "Plasma", "value": "Plasma"},
                    {"label": "Inferno", "value": "Inferno"},
                    {"label": "Magma", "value": "Magma"},
                    {"label": "Cividis", "value": "Cividis"},
                    {"label": "Greys", "value": "Greys"},
                    {"label": "Greens", "value": "Greens"},
                    {"label": "YlOrRd", "value": "YlOrRd"},
                    {"label": "Bluered", "value": "Bluered"},
                    {"label": "RdBu", "value": "RdBu"},
                    {"label": "Reds", "value": "Reds"},
                    {"label": "Blues", "value": "Blues"},
                    {"label": "Picnic", "value": "Picnic"},
                    {"label": "Rainbow", "value": "Rainbow"},
                    {"label": "Portland", "value": "Portland"},
                    {"label": "Jet", "value": "Jet"},
                    {"label": "Hot", "value": "Hot"},
                    {"label": "Blackbody", "value": "Blackbody"},
                    {"label": "Earth", "value": "Earth"},
                    {"label": "Electric", "value": "Electric"},
                    {"label": "Viridis", "value": "Viridis"},
                    {"label": "Cividis", "value": "Cividis"},
                ],
                value="Viridis",
            ),
        ],
        className="dbc",
    )

footer()

Create the footer.

Returns:

Type Description
dbc.Card

dbc.Card: Footer as a bootstrap card.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def footer(self) -> dbc.Card:
    """Create the footer.

    Returns:
        dbc.Card: Footer as a bootstrap card.
    """
    return (
        dbc.Card(
            dbc.CardBody(
                [
                    dbc.Row(
                        [
                            dcc.Markdown(
                                """
                ### RIXS Viewer
                This is a simple RIXS viewer. It is based on the
                [Dash](https://dash.plotly.com/)
                framework and uses the [Plotly](https://plotly.com/python/) library
                for plotting. The code is available on
                [GitHub](https://github.com/anselmoo/spectrafit).
                """
                            ),
                        ],
                        justify="left",
                    )
                ]
            ),
            class_name="mt-4",
        ),
    )[0]

header()

Create the header.

Returns:

Type Description
dbc.Card

dbc.Card: Header as a bootstrap card.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def header(self) -> dbc.Card:
    """Create the header.

    Returns:
        dbc.Card: Header as a bootstrap card.
    """
    return dbc.Card(
        dbc.CardBody(
            [
                html.H4(
                    "RIXS Visualizer App",
                    className="bg-primary text-white p-2 mb-2 text-center",
                )
            ]
        )
    )

logging_flask()

Set the logging level of the Flask server to ERROR.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def logging_flask(self) -> None:
    """Set the logging level of the Flask server to ERROR."""
    log = logging.getLogger("werkzeug")
    log.setLevel(logging.ERROR)

opacity()

Create the opacity slider.

Returns:

Type Description
html.Div

html.Div: Opacity slider.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def opacity(self) -> html.Div:
    """Create the opacity slider.

    Returns:
        html.Div: Opacity slider.
    """
    return html.Div(
        [
            dbc.Label("Opacity"),
            dcc.Slider(
                id="opacity",
                min=0,
                max=1,
                step=0.1,
                value=1,
                marks={i: str(i) for i in range(2)},
            ),
        ]
    )

pre_body()

Create the body.

Returns:

Type Description
Tuple[html.Div, html.Div, html.Div]

Tuple[html.Div, html.Div, html.Div]: Body as a tuple of three plot parts.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def pre_body(self) -> Tuple[html.Div, html.Div, html.Div]:
    """Create the body.

    Returns:
        Tuple[html.Div, html.Div, html.Div]: Body as a tuple of three plot parts.
    """
    rixs = html.Div(
        [
            dbc.Label(self.main_title.rixs),
            dcc.Graph(id="rixs-figure"),
        ]
    )
    xes = html.Div(
        [
            dbc.Label(self.main_title.xes),
            dcc.Graph(id="xes-figure"),
        ]
    )
    xas = html.Div(
        [
            dbc.Label(self.main_title.xas),
            dcc.Graph(id="xas-figure"),
        ]
    )
    return rixs, xes, xas

RIXSFigure

Class to create the RIXS figure.

About the RIXS figure

The RIXS figure is composed of three subplots:

  - RIXS -> 3D plot
  - XES -> 2D plot
  - XAS -> 2D plot

Source code in spectrafit/plugins/rixs_visualizer.py
Python
class RIXSFigure:
    """Class to create the RIXS figure.

    !!! info "About the RIXS figure"

        The RIXS figure is composed of three subplots:

        - RIXS -> 3D plot
        - XES -> 2D plot
        - XAS -> 2D plot

    """

    def __init__(
        self,
        incident_energy: NDArray[np.float64],
        emission_energy: NDArray[np.float64],
        rixs_map: NDArray[np.float64],
        size: SizeRatioAPI = SizeRatioAPI(
            size=(500, 500),
            ratio_rixs=(2, 2),
            ratio_xes=(3, 1),
            ratio_xas=(3, 1),
        ),
        x_axis: XAxisAPI = XAxisAPI(name="Incident Energy", unit="eV"),
        y_axis: YAxisAPI = YAxisAPI(name="Emission Energy", unit="eV"),
        z_axis: ZAxisAPI = ZAxisAPI(name="Intensity", unit="a.u."),
    ):
        """Initialize the RIXS figure.

        Args:
            incident_energy (NDArray[np.float64]): Incident energy.
            emission_energy (NDArray[np.float64]): Emission energy.
            rixs_map (NDArray[np.float64]): RIXS data as a 2D array.
            size (SizeRatioAPI, optional): Size of the figure.
                 Defaults to SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2),
                 ratio_xes=(3, 1), ratio_xas=(3, 1)).
            x_axis (XAxisAPI, optional): X-Axis of the figure.
                 Defaults to XAxisAPI(name="Incident Energy", unit="eV").
            y_axis (YAxisAPI, optional): Y-Axis of the figure.
                 Defaults to YAxisAPI(name="Emission Energy", unit="eV").
            z_axis (ZAxisAPI, optional): Z-Axis of the figure.
                 Defaults to ZAxisAPI(name="Intensity", unit="a.u.").
        """
        self.incident_energy = incident_energy
        self.emission_energy = emission_energy
        self.rixs_map = rixs_map

        self.x_axis = x_axis
        self.y_axis = y_axis
        self.z_axis = z_axis
        self.initialize_figure_size(size)

    def initialize_figure_size(self, size: SizeRatioAPI) -> None:
        """Initialize the size of the figure.

        Args:
            size (SizeRatioAPI): Size of the figure.
        """
        self.rixs_width = int(size.size[0] * size.ratio_rixs[0])
        self.rixs_height = int(size.size[1] * size.ratio_rixs[1])
        self.xas_width = int(size.size[0] * size.ratio_xas[0])
        self.xas_height = int(size.size[1] * size.ratio_xas[1])
        self.xes_width = int(size.size[0] * size.ratio_xes[0])
        self.xes_height = int(size.size[1] * size.ratio_xes[1])

    def create_rixs(
        self,
        colorscale: str = "Viridis",
        opacity: float = 0.9,
        template: Optional[str] = None,
    ) -> go.Figure:
        """Create the RIXS figure.

        Args:
            colorscale (str, optional): Color scale. Defaults to "Viridis".
            opacity (float, optional): Opacity of the surface. Defaults to 0.9.
            template (str, optional): Template of the figure. Defaults to None.

        Returns:
            go.Figure: RIXS figure.
        """
        fig = go.Figure(
            data=[
                go.Surface(
                    x=self.incident_energy,
                    y=self.emission_energy,
                    z=self.rixs_map,
                    colorscale=colorscale,
                    opacity=opacity,
                    contours_z=dict(
                        show=True,
                        usecolormap=True,
                        highlightcolor="limegreen",
                        project_z=True,
                    ),
                )
            ],
        )

        fig.update_layout(
            autosize=True,
            width=self.rixs_width,
            height=self.rixs_height,
            scene=dict(
                xaxis_title=DataFramePlot.title_text(
                    name=self.x_axis.name, unit=self.x_axis.unit
                ),
                yaxis_title=DataFramePlot.title_text(
                    name=self.y_axis.name, unit=self.y_axis.unit
                ),
                zaxis_title=DataFramePlot.title_text(
                    name=self.z_axis.name, unit=self.z_axis.unit
                ),
                aspectmode="cube",
            ),
            template=template,
        )
        fig.update_traces(
            contours_z=dict(
                show=True, usecolormap=True, highlightcolor="limegreen", project_z=True
            )
        )
        return fig

    def create_xes(
        self,
        x: NDArray[np.float64],
        y: NDArray[np.float64],
        template: Optional[str] = None,
    ) -> go.Figure:
        """Create the XES figure.

        Args:
            x (NDArray[np.float64]): X-axis of the figure.
            y (NDArray[np.float64]): Y-axis of the figure.
            template (str, optional): Template of the figure. Defaults to None.

        Returns:
            go.Figure: XES figure.
        """
        fig = px.line(x=x, y=y, template=template)
        fig.update_layout(
            autosize=True,
            width=self.xes_width,
            height=self.xes_height,
        )
        # Update the xaxis title
        fig.update_xaxes(
            title_text=DataFramePlot.title_text(
                name=self.y_axis.name, unit=self.y_axis.unit
            )
        )
        # Update the yaxis title
        fig.update_yaxes(
            title_text=DataFramePlot.title_text(
                name=self.z_axis.name, unit=self.z_axis.unit
            )
        )
        return fig

    def create_xas(
        self,
        x: NDArray[np.float64],
        y: NDArray[np.float64],
        template: Optional[str] = None,
    ) -> go.Figure:
        """Create the XAS figure.

        Args:
            x (NDArray[np.float64]): X-axis of the figure.
            y (NDArray[np.float64]): Y-axis of the figure.
            template (str, optional): Template of the figure. Defaults to None.

        Returns:
            go.Figure: XAS figure.
        """
        fig = px.line(x=x, y=y, template=template)
        fig.update_layout(
            autosize=True,
            width=self.xas_width,
            height=self.xas_height,
        )
        fig.update_xaxes(
            title_text=DataFramePlot.title_text(
                name=self.x_axis.name, unit=self.x_axis.unit
            )
        )
        fig.update_yaxes(
            title_text=DataFramePlot.title_text(
                name=self.z_axis.name, unit=self.z_axis.unit
            )
        )
        return fig
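
The figure class can be used on its own, without the Dash app, to build the three plots; a minimal sketch with synthetic arrays (values are illustrative only):

Python
import numpy as np

from spectrafit.plugins.rixs_visualizer import RIXSFigure

incident = np.linspace(640.0, 660.0, 50)
emission = np.linspace(630.0, 650.0, 60)
rixs_map = np.random.default_rng(1).random((incident.size, emission.size))

figure = RIXSFigure(incident_energy=incident, emission_energy=emission, rixs_map=rixs_map)
rixs_fig = figure.create_rixs(colorscale="Viridis", opacity=0.9)
xes_fig = figure.create_xes(x=emission, y=rixs_map[25, :])  # cut at one incident energy
xas_fig = figure.create_xas(x=incident, y=rixs_map[:, 30])  # cut at one emission energy
rixs_fig.show()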

__init__(incident_energy, emission_energy, rixs_map, size=SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xes=(3, 1), ratio_xas=(3, 1)), x_axis=XAxisAPI(name='Incident Energy', unit='eV'), y_axis=YAxisAPI(name='Emission Energy', unit='eV'), z_axis=ZAxisAPI(name='Intensity', unit='a.u.'))

Initialize the RIXS figure.

Parameters:

Name Type Description Default
incident_energy NDArray[np.float64]

Incident energy.

required
emission_energy NDArray[np.float64]

Emission energy.

required
rixs_map NDArray[np.float64]

RIXS data as a 2D array.

required
size SizeRatioAPI

Size of the figure. Defaults to SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xes=(3, 1), ratio_xas=(3, 1)).

SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xes=(3, 1), ratio_xas=(3, 1))
x_axis XAxisAPI

X-Axis of the figure. Defaults to XAxisAPI(name="Incident Energy", unit="eV").

XAxisAPI(name='Incident Energy', unit='eV')
y_axis YAxisAPI

Y-Axis of the figure. Defaults to YAxisAPI(name="Emission Energy", unit="eV").

YAxisAPI(name='Emission Energy', unit='eV')
z_axis ZAxisAPI

Z-Axis of the figure. Defaults to ZAxisAPI(name="Intensity", unit="a.u.").

ZAxisAPI(name='Intensity', unit='a.u.')
Source code in spectrafit/plugins/rixs_visualizer.py
Python
def __init__(
    self,
    incident_energy: NDArray[np.float64],
    emission_energy: NDArray[np.float64],
    rixs_map: NDArray[np.float64],
    size: SizeRatioAPI = SizeRatioAPI(
        size=(500, 500),
        ratio_rixs=(2, 2),
        ratio_xes=(3, 1),
        ratio_xas=(3, 1),
    ),
    x_axis: XAxisAPI = XAxisAPI(name="Incident Energy", unit="eV"),
    y_axis: YAxisAPI = YAxisAPI(name="Emission Energy", unit="eV"),
    z_axis: ZAxisAPI = ZAxisAPI(name="Intensity", unit="a.u."),
):
    """Initialize the RIXS figure.

    Args:
        incident_energy (NDArray[np.float64]): Incident energy.
        emission_energy (NDArray[np.float64]): Emission energy.
        rixs_map (NDArray[np.float64]): RIXS data as a 2D array.
        size (SizeRatioAPI, optional): Size of the figure.
             Defaults to SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2),
             ratio_xes=(3, 1), ratio_xas=(3, 1)).
        x_axis (XAxisAPI, optional): X-Axis of the figure.
             Defaults to XAxisAPI(name="Incident Energy", unit="eV").
        y_axis (YAxisAPI, optional): Y-Axis of the figure.
             Defaults to YAxisAPI(name="Emission Energy", unit="eV").
        z_axis (ZAxisAPI, optional): Z-Axis of the figure.
             Defaults to ZAxisAPI(name="Intensity", unit="a.u.").
    """
    self.incident_energy = incident_energy
    self.emission_energy = emission_energy
    self.rixs_map = rixs_map

    self.x_axis = x_axis
    self.y_axis = y_axis
    self.z_axis = z_axis
    self.initialize_figure_size(size)

create_rixs(colorscale='Viridis', opacity=0.9, template=None)

Create the RIXS figure.

Parameters:

Name Type Description Default
colorscale str

Color scale. Defaults to "Viridis".

'Viridis'
opacity float

Opacity of the surface. Defaults to 0.9.

0.9
template str

Template of the figure. Defaults to None.

None

Returns:

Type Description
go.Figure

go.Figure: RIXS figure.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def create_rixs(
    self,
    colorscale: str = "Viridis",
    opacity: float = 0.9,
    template: Optional[str] = None,
) -> go.Figure:
    """Create the RIXS figure.

    Args:
        colorscale (str, optional): Color scale. Defaults to "Viridis".
        opacity (float, optional): Opacity of the surface. Defaults to 0.9.
        template (str, optional): Template of the figure. Defaults to None.

    Returns:
        go.Figure: RIXS figure.
    """
    fig = go.Figure(
        data=[
            go.Surface(
                x=self.incident_energy,
                y=self.emission_energy,
                z=self.rixs_map,
                colorscale=colorscale,
                opacity=opacity,
                contours_z=dict(
                    show=True,
                    usecolormap=True,
                    highlightcolor="limegreen",
                    project_z=True,
                ),
            )
        ],
    )

    fig.update_layout(
        autosize=True,
        width=self.rixs_width,
        height=self.rixs_height,
        scene=dict(
            xaxis_title=DataFramePlot.title_text(
                name=self.x_axis.name, unit=self.x_axis.unit
            ),
            yaxis_title=DataFramePlot.title_text(
                name=self.y_axis.name, unit=self.y_axis.unit
            ),
            zaxis_title=DataFramePlot.title_text(
                name=self.z_axis.name, unit=self.z_axis.unit
            ),
            aspectmode="cube",
        ),
        template=template,
    )
    fig.update_traces(
        contours_z=dict(
            show=True, usecolormap=True, highlightcolor="limegreen", project_z=True
        )
    )
    return fig

create_xas(x, y, template=None)

Create the XAS figure.

Parameters:

Name Type Description Default
x NDArray[np.float64]

X-axis of the figure.

required
y NDArray[np.float64]

Y-axis of the figure.

required
template str

Template of the figure. Defaults to None.

None

Returns:

Type Description
go.Figure

go.Figure: XAS figure.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def create_xas(
    self,
    x: NDArray[np.float64],
    y: NDArray[np.float64],
    template: Optional[str] = None,
) -> go.Figure:
    """Create the XAS figure.

    Args:
        x (NDArray[np.float64]): X-axis of the figure.
        y (NDArray[np.float64]): Y-axis of the figure.
        template (str, optional): Template of the figure. Defaults to None.

    Returns:
        go.Figure: XAS figure.
    """
    fig = px.line(x=x, y=y, template=template)
    fig.update_layout(
        autosize=True,
        width=self.xas_width,
        height=self.xas_height,
    )
    fig.update_xaxes(
        title_text=DataFramePlot.title_text(
            name=self.x_axis.name, unit=self.x_axis.unit
        )
    )
    fig.update_yaxes(
        title_text=DataFramePlot.title_text(
            name=self.z_axis.name, unit=self.z_axis.unit
        )
    )
    return fig

create_xes(x, y, template=None)

Create the XES figure.

Parameters:

Name Type Description Default
x NDArray[np.float64]

X-axis of the figure.

required
y NDArray[np.float64]

Y-axis of the figure.

required
template str

Template of the figure. Defaults to None.

None

Returns:

Type Description
go.Figure

go.Figure: XES figure.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def create_xes(
    self,
    x: NDArray[np.float64],
    y: NDArray[np.float64],
    template: Optional[str] = None,
) -> go.Figure:
    """Create the XES figure.

    Args:
        x (NDArray[np.float64]): X-axis of the figure.
        y (NDArray[np.float64]): Y-axis of the figure.
        template (str, optional): Template of the figure. Defaults to None.

    Returns:
        go.Figure: XES figure.
    """
    fig = px.line(x=x, y=y, template=template)
    fig.update_layout(
        autosize=True,
        width=self.xes_width,
        height=self.xes_height,
    )
    # Update the xaxis title
    fig.update_xaxes(
        title_text=DataFramePlot.title_text(
            name=self.y_axis.name, unit=self.y_axis.unit
        )
    )
    # Update the yaxis title
    fig.update_yaxes(
        title_text=DataFramePlot.title_text(
            name=self.z_axis.name, unit=self.z_axis.unit
        )
    )
    return fig

initialize_figure_size(size)

Initialize the size of the figure.

Parameters:

Name Type Description Default
size SizeRatioAPI

Size of the figure.

required
Source code in spectrafit/plugins/rixs_visualizer.py
Python
def initialize_figure_size(self, size: SizeRatioAPI) -> None:
    """Initialize the size of the figure.

    Args:
        size (SizeRatioAPI): Size of the figure.
    """
    self.rixs_width = int(size.size[0] * size.ratio_rixs[0])
    self.rixs_height = int(size.size[1] * size.ratio_rixs[1])
    self.xas_width = int(size.size[0] * size.ratio_xas[0])
    self.xas_height = int(size.size[1] * size.ratio_xas[1])
    self.xes_width = int(size.size[0] * size.ratio_xes[0])
    self.xes_height = int(size.size[1] * size.ratio_xes[1])
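
With the default SizeRatioAPI(size=(500, 500), ratio_rixs=(2, 2), ratio_xes=(3, 1), ratio_xas=(3, 1)), the resulting figure dimensions are:

Python
rixs_width, rixs_height = 500 * 2, 500 * 2  # 1000 x 1000 px surface plot
xes_width, xes_height = 500 * 3, 500 * 1    # 1500 x 500 px line plot
xas_width, xas_height = 500 * 3, 500 * 1    # 1500 x 500 px line plot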

RIXSVisualizer

RIXS Visualizer. This class is used to visualize RIXS data.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
class RIXSVisualizer:
    """RIXS Visualizer. This class is used to visualize RIXS data."""

    def get_args(self) -> Dict[str, Any]:
        """Get the arguments from the command line.

        Returns:
            Dict[str, Any]: Return the input file arguments as a dictionary without
                additional information beyond the command line arguments.
        """
        parser = argparse.ArgumentParser(
            description="`RIXS-Visualizer` is a simple RIXS plane viewer, which "
            "allows to visualize RIXS data in a 2D plane."
        )
        parser.add_argument(
            "infile",
            type=Path,
            help="The input file. This can be a json, toml, npy, or npz file.",
        )
        return vars(parser.parse_args())

    @staticmethod
    def load_data(infile: Path) -> RIXSModelAPI:
        """Load the data from the input file.

        Args:
            infile (Path): The input file path. This can be a json, toml, npy, or npz
                file.

        Raises:
            ValueError: If the file type is not supported.

        Returns:
            RIXSModelAPI: The data as a pydantic model object with the following
                attributes: incident_energy, emission_energy, and rixs_map. The
                incident_energy and emission_energy are 1D arrays, and the rixs_map is
                a 2D array.
        """
        if infile.suffix == ".npy":
            data = np.load(infile, allow_pickle=True).item()
        elif infile.suffix == ".npz":
            data = np.load(infile, allow_pickle=True)
        elif infile.suffix == ".json":
            with open(infile, encoding="utf-8") as f:
                data = json.load(f)
        elif infile.suffix in {".toml", ".lock"}:
            with open(infile, "rb") as f:
                data = tomli.load(f)
        else:
            raise ValueError(f"File type {infile.suffix} is not supported.")
        return RIXSModelAPI(
            incident_energy=np.array(data["incident_energy"]),
            emission_energy=np.array(data["emission_energy"]),
            rixs_map=np.array(data["rixs_map"]),
        )

    def __call__(self) -> None:  # pragma: no cover
        """Run the RIXS Visualizer."""
        app = RIXSApp(**self.load_data(self.get_args()["infile"]).dict())
        app.app_run()

__call__()

Run the RIXS Visualizer.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def __call__(self) -> None:  # pragma: no cover
    """Run the RIXS Visualizer."""
    app = RIXSApp(**self.load_data(self.get_args()["infile"]).dict())
    app.app_run()

get_args()

Get the arguments from the command line.

Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Any]` | Return the input file arguments as a dictionary without additional information beyond the command line arguments. |

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def get_args(self) -> Dict[str, Any]:
    """Get the arguments from the command line.

    Returns:
        Dict[str, Any]: Return the input file arguments as a dictionary without
            additional information beyond the command line arguments.
    """
    parser = argparse.ArgumentParser(
        description="`RIXS-Visualizer` is a simple RIXS plane viewer, which "
        "allows to visualize RIXS data in a 2D plane."
    )
    parser.add_argument(
        "infile",
        type=Path,
        help="The input file. This can be a json, toml, npy, or npz file.",
    )
    return vars(parser.parse_args())
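
As an illustration of the returned dictionary, the snippet below applies the same one-positional-argument parser to an explicit argument list; the file name is a placeholder.

Python
# Illustrative only: the parser logic applied to an explicit argv list.
import argparse
from pathlib import Path

parser = argparse.ArgumentParser(description="RIXS plane viewer (sketch)")
parser.add_argument("infile", type=Path)
args = vars(parser.parse_args(["rixs_plane.npz"]))  # placeholder file name
print(args)  # e.g. {'infile': PosixPath('rixs_plane.npz')}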

load_data(infile) staticmethod

Load the data from the input file.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `infile` | `Path` | The input file path. This can be a json, toml, npy, or npz file. | required |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the file type is not supported. |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `RIXSModelAPI` | `RIXSModelAPI` | The data as a pydantic model object with the following attributes: incident_energy, emission_energy, and rixs_map. The incident_energy and emission_energy are 1D arrays, and the rixs_map is a 2D array. |

Source code in spectrafit/plugins/rixs_visualizer.py
Python
@staticmethod
def load_data(infile: Path) -> RIXSModelAPI:
    """Load the data from the input file.

    Args:
        infile (Path): The input file path. This can be a json, toml, npy, or npz
            file.

    Raises:
        ValueError: If the file type is not supported.

    Returns:
        RIXSModelAPI: The data as a pydantic model object with the following
            attributes: incident_energy, emission_energy, and rixs_map. The
            incident_energy and emission_energy are 1D arrays, and the rixs_map is
            a 2D array.
    """
    if infile.suffix == ".npy":
        data = np.load(infile, allow_pickle=True).item()
    elif infile.suffix == ".npz":
        data = np.load(infile, allow_pickle=True)
    elif infile.suffix == ".json":
        with open(infile, encoding="utf-8") as f:
            data = json.load(f)
    elif infile.suffix in {".toml", ".lock"}:
        with open(infile, "rb") as f:
            data = tomli.load(f)
    else:
        raise ValueError(f"File type {infile.suffix} is not supported.")
    return RIXSModelAPI(
        incident_energy=np.array(data["incident_energy"]),
        emission_energy=np.array(data["emission_energy"]),
        rixs_map=np.array(data["rixs_map"]),
    )
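
A minimal sketch of preparing a compatible input file and loading it with `load_data`; the energy grids, the Gaussian-shaped map, and the file name are made-up example data.

Python
# Illustrative sketch: build a compatible .npy input file and load it.
from pathlib import Path

import numpy as np

from spectrafit.plugins.rixs_visualizer import RIXSVisualizer

incident = np.linspace(640.0, 660.0, 50)
emission = np.linspace(630.0, 650.0, 40)
rixs_map = np.exp(
    -((incident[:, None] - 650.0) ** 2 + (emission[None, :] - 640.0) ** 2) / 10.0
)

infile = Path("rixs_plane.npy")
# np.save stores the dict as a 0-d object array, which load_data unpacks via .item()
np.save(
    infile,
    {"incident_energy": incident, "emission_energy": emission, "rixs_map": rixs_map},
    allow_pickle=True,
)

model = RIXSVisualizer.load_data(infile)
print(model.rixs_map.shape)  # (50, 40)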

command_line_runner()

Run the RIXS Visualizer from the command line.

Source code in spectrafit/plugins/rixs_visualizer.py
Python
def command_line_runner() -> None:
    """Run the RIXS Visualizer from the command line."""
    RIXSVisualizer()()
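
The console-script name under which `command_line_runner` is registered depends on the package configuration, so the sketch below starts the runner from Python instead; the argument list is a placeholder.

Python
# Illustrative only: simulate the command-line invocation from Python.
import sys

from spectrafit.plugins.rixs_visualizer import command_line_runner

sys.argv = ["rixs_visualizer", "rixs_plane.npz"]  # placeholder file name
command_line_runner()  # parses argv, loads the data, and starts the viewer app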