Merge branch 'master' into halvor_xanes
Commit 28b807c9f9
18 changed files with 662 additions and 261 deletions
docs/Makefile  (new file, 20 lines)
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
docs/about.md  (new file, 9 lines)
@@ -0,0 +1,9 @@
# About

This package contains data processing, analysis and viewing tools written in Python for several different activities related to inorganic materials chemistry conducted in the NAFUMA group at the University of Oslo. It is written with the intention of creating a reproducible workflow for documentation purposes, with a focus on interactivity in the data exploration process.

As of now (8 April 2022), the intention is to include tools for XRD, XANES and electrochemistry analysis, but other modules might be added as well.
docs/conf.py  (new file, 57 lines)
@@ -0,0 +1,57 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))


# -- Project information -----------------------------------------------------

project = 'NAFUMA'
copyright = '2022, Rasmus Vester Thøgersen & Halvor Høen Hval'
author = 'Rasmus Vester Thøgersen & Halvor Høen Hval'

# The full version, including alpha/beta/rc tags
release = '0.2'


# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['myst_parser']
source_suffix = ['.rst', '.md']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

html_sidebars = {'**': ['globaltoc.html', 'relations.html', 'sourcelink.html', 'searchbox.html']}
docs/index.rst  (new file, 22 lines)
@@ -0,0 +1,22 @@
.. NAFUMA documentation master file, created by
   sphinx-quickstart on Fri Apr 8 15:32:14 2022.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to NAFUMA's documentation!
==================================

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   about
   installation
   modules/modules

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
docs/installation.md  (new file, 25 lines)
@@ -0,0 +1,25 @@
# Installation

This package is not available on any package repositories, but can be installed by cloning the repository from GitHub and installing with `pip install` from the root folder:

```
$ git clone git@github.com:rasmusthog/nafuma.git
$ cd nafuma
$ pip install .
```
If you plan on making changes to the code base, consider installing it in development mode by including the `-e` flag, so that changes take effect without having to reinstall:

```
pip install -e .
```

As of v0.2 (8 April 2022), the installer does not install any dependencies. It is recommended to use `conda` to create an environment from `environment.yml` in the root folder:

```
$ conda env create --name <your_environment_name_here> --file environment.yml
$ conda activate <your_environment_name_here>
```

(Remember to also remove the angle brackets when substituting your environment name.)

This should get you up and running!
docs/make.bat  (new file, 35 lines)
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
docs/modules/electrochemistry.md  (new file, 3 lines)
@@ -0,0 +1,3 @@
# Electrochemistry

This is a placeholder.
docs/modules/modules.rst  (new file, 12 lines)
@@ -0,0 +1,12 @@
Modules
==================================

.. toctree::
   :maxdepth: 1
   :caption: Contents

   xrd.md
   xanes.md
   electrochemistry.md
docs/modules/xanes.md  (new file, 1 line)
@@ -0,0 +1 @@
# XANES
docs/modules/xrd.md  (new file, 130 lines)
@@ -0,0 +1,130 @@
# XRD

This module contains functions to view diffractogram data from several different sources. Some features include:

- Plotting the data in wavelength-independent parameters (d, 1/d, q, q{math}`^2`, q{math}`^4`), or translated to CuK{math}`\alpha` or MoK{math}`\alpha`, allowing comparison between diffractograms obtained with different wavelengths
- Plotting in interactive mode within a Jupyter Notebook using the `ipywidgets` package, allowing real-time changes to (certain) parameters
- Plotting reflection ticks and/or reflection indices from multiple simulated reflection tables (generated by VESTA) for comparison
- Plotting series of diffractograms in stacked mode (including the ability to rotate the view in 3D) or as a heatmap


## 1 Compatible file formats

The module is partially built as a wrapper around [pyFAI](https://github.com/silx-kit/pyFAI) (Fast Azimuthal Integration), developed at the ESRF for integrating the 2D diffraction images produced by their detectors. Given a suitable calibration file (`.poni`), the XRD module will automatically integrate any file pyFAI can integrate. When running in interactive mode the integration is only done once, but for larger sets of diffractograms it is advised to perform the integration in a separate processing step and save the results as `.xy` files, since the integration is otherwise repeated every time the function is called. A sketch of such a pre-integration step is shown below.

In addition to this, it can also read the `.brml` files produced by the Bruker instruments in the RECX lab at the University of Oslo.
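The pre-integration step described above is not tied to a specific helper in this package; a minimal sketch using pyFAI and fabio directly could look like the following (the calibration file, image names and output folder are placeholders):

```py
import os
import fabio    # image reader commonly used together with pyFAI
import pyFAI

# Azimuthal integrator built from the calibration (.poni) file
ai = pyFAI.load('path/to/calibration.poni')

raw_files = ['scan_0001.edf', 'scan_0002.edf']   # hypothetical 2D images
os.makedirs('integrated', exist_ok=True)

for raw in raw_files:
    image = fabio.open(raw).data
    out = os.path.join('integrated', raw.rsplit('.', 1)[0] + '.xy')
    # 1D integration to 3000 points in 2theta (degrees), written as a two-column .xy file
    ai.integrate1d(image, 3000, unit='2th_deg', filename=out)
```

The resulting `.xy` files can then be passed in `data['path']` as in the examples below, avoiding re-integration on every call.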
## 2 Basic usage

Plotting diffractograms is done by calling the `xrd.plot.plot_diffractogram()` function, which takes two dictionaries as arguments: `data`, containing all data-specific information, and `options`, which allows customisation of a range of different parameters. The `options` argument is optional, and the function provides default values for everything it needs, aiming for a reasonable plot to begin with.

**Example #1: Single diffractogram**

```py
import nafuma.xrd as xrd

data = {
    'path': 'path/to/data/diffractogram.brml'
}

options = {
    'reflections_data': [
        {'path': 'reflections_phase_1.txt', 'min_alpha': 0.1, 'reflection_indices': 4, 'label': 'Phase 1', 'text_colour': 'black'},
        {'path': 'reflections_phase_2.txt', 'min_alpha': 0.1, 'reflection_indices': 4, 'label': 'Phase 2', 'text_colour': 'red'}
    ],
    'hide_y_ticklabels': True,
    'hide_y_ticks': True
}


diff, fig, ax = xrd.plot.plot_diffractogram(data=data, options=options)
```
The return value `diff` is a list containing one `pandas.DataFrame` per diffractogram passed, in the above example only one. `fig` and `ax` are `matplotlib` `Figure` and `Axes` objects, respectively.

**Example #2: 2D diffractogram from the ESRF requiring integration**

```py
import nafuma.xrd as xrd

data = {
    'path': 'path/to/data/2d_diffractogram.edf',
    'calibrant': 'path/to/calibrant/calibrant.poni',
    'nbins': 3000
}

diff, *_ = xrd.plot.plot_diffractogram(data=data)
```

In this case we did not specify any options, so only default values are used, and we collected both `fig` and `ax` in the throwaway variable `_` as we do not intend to use them.

**Example #3: Plotting in interactive mode**

This can be done within a Jupyter Notebook, and allows the user to tweak certain parameters in real time instead of having to call the function again every time.

```py
import nafuma.xrd as xrd

data = {
    'path': 'path/to/data/diffractogram.brml'
}

options = {
    'interactive': True
}


diff, *_ = xrd.plot.plot_diffractogram(data=data, options=options)
```

**Example #4: Plotting multiple diffractograms as stacked plots**

Instead of passing just a string, you can pass a list of filenames. These are plotted sequentially, with offsets if desired (`offset_x` and `offset_y`). The default value of `offset_y` is 1 if fewer than 10 diffractograms are passed, and 0.1 if more than 10 are passed. When plotting series data (e.g. from *in situ* or *operando* measurements), a smaller offset is suitable. Keep in mind that these values only make sense when the diffractograms are normalised (`'normalise': True`) - if not, the default offsets will be way too small to be noticeable.

```py
import nafuma.xrd as xrd

data = {
    'path': ['path/to/data/diffractogram_1.brml', 'path/to/data/diffractogram_2.brml']
}


options = {
    'offset_y': 0.1,
    'offset_x': 0.05,
}


diff, *_ = xrd.plot.plot_diffractogram(data=data, options=options)
```


**Example #5: Plotting series data as a heatmap**

This differs very little from the above, except that heatmaps only really make sense for series data, and that you do not want any offsets in a heatmap.

```py
import nafuma.xrd as xrd

list_of_data = ['data_1.brml', 'data_2.brml', ..., 'data_n.brml']

data = {
    'path': list_of_data
}


options = {
    'heatmap': True
}


diff, *_ = xrd.plot.plot_diffractogram(data=data, options=options)
```
@@ -1,40 +1,28 @@
from email.policy import default
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os

import nafuma.auxillary as aux
from sympy import re

def read_data(path, kind, options=None):
def read_data(data, options=None):

    if kind == 'neware':
        df = read_neware(path)
        cycles = process_neware_data(df, options=options)
    if data['kind'] == 'neware':
        df = read_neware(data['path'])
        cycles = process_neware_data(df=df, options=options)

    elif kind == 'batsmall':
        df = read_batsmall(path)
    elif data['kind'] == 'batsmall':
        df = read_batsmall(data['path'])
        cycles = process_batsmall_data(df=df, options=options)

    elif kind == 'biologic':
        df = read_biologic(path)
    elif data['kind'] == 'biologic':
        df = read_biologic(data['path'])
        cycles = process_biologic_data(df=df, options=options)

    return cycles

def read_batsmall(path):
    ''' Reads BATSMALL-data into a DataFrame.

    Input:
    path (required): string with path to datafile

    Output:
    df: pandas DataFrame containing the data as-is, but without additional NaN-columns.'''

    df = pd.read_csv(path, skiprows=2, sep='\t')
    df = df.loc[:, ~df.columns.str.contains('^Unnamed')]

    return df



def read_neware(path, summary=False):
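The hunk above changes read_data to take a single `data` dictionary instead of separate `path` and `kind` arguments. A minimal usage sketch under the new signature (the file name and the chosen options are hypothetical):

```py
import nafuma.electrochemistry as ec

data = {'path': 'path/to/cell_01.txt', 'kind': 'batsmall'}
options = {'reverse_discharge': True}

cycles = ec.io.read_data(data=data, options=options)

# Each element of cycles is a (charge, discharge) tuple of DataFrames,
# e.g. cycles[10][0] is the charge data of the 11th cycle.
chg_df, dchg_df = cycles[0]
```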
@@ -43,6 +31,8 @@ def read_neware(path, summary=False):
    type is .csv, it will just open the datafile and it does not matter if summary is False or not.'''
    from xlsx2csv import Xlsx2csv

    # FIXME Do a check if a .csv-file already exists even if the .xlsx is passed

    # Convert from .xlsx to .csv to make readtime faster
    if path.split('.')[-1] == 'xlsx':
        csv_details = ''.join(path.split('.')[:-1]) + '_details.csv'

@@ -66,6 +56,20 @@ def read_neware(path, summary=False):
    return df


def read_batsmall(path):
    ''' Reads BATSMALL-data into a DataFrame.

    Input:
    path (required): string with path to datafile

    Output:
    df: pandas DataFrame containing the data as-is, but without additional NaN-columns.'''

    df = pd.read_csv(path, skiprows=2, sep='\t')
    df = df.loc[:, ~df.columns.str.contains('^Unnamed')]

    return df


def read_biologic(path):
    ''' Reads Bio-Logic-data into a DataFrame.

@@ -76,23 +80,19 @@ def read_biologic(path):
    Output:
    df: pandas DataFrame containing the data as-is, but without additional NaN-columns.'''

    with open(path, 'r') as f:
    with open(path, 'rb') as f:
        lines = f.readlines()

    header_lines = int(lines[1].split()[-1]) - 1


    df = pd.read_csv(path, sep='\t', skiprows=header_lines)
    df = pd.read_csv(path, sep='\t', skiprows=header_lines, encoding='cp1252')
    df.dropna(inplace=True, axis=1)

    return df



def process_batsmall_data(df, options=None):
    ''' Takes BATSMALL-data in the form of a DataFrame and cleans the data up and converts units into desired units.
    Splits up into individual charge and discharge DataFrames per cycle, and outputs a list where each element is a tuple with the Chg and DChg-data. E.g. cycles[10][0] gives the charge data for the 11th cycle.

@@ -111,26 +111,25 @@ def process_batsmall_data(df, options=None):
    '''

    required_options = ['splice_cycles', 'molecular_weight', 'reverse_discharge', 'units']
    default_options = {'splice_cycles': False, 'molecular_weight': None, 'reverse_discharge': False, 'units': None}

    if not options:
        options = default_options
    else:
        for option in required_options:
            if option not in options.keys():
                options[option] = default_options[option]
    default_options = {
        'splice_cycles': False,
        'molecular_weight': None,
        'reverse_discharge': False,
        'units': None}


    aux.update_options(options=options, required_options=required_options, default_options=default_options)
    options['kind'] = 'batsmall'

    # Complete set of new units and get the units used in the dataset, and convert values in the DataFrame from old to new.
    new_units = set_units(units=options['units'])
    old_units = get_old_units(df, kind='batsmall')
    df = unit_conversion(df=df, new_units=new_units, old_units=old_units, kind='batsmall')

    options['units'] = new_units
    set_units(options)
    options['old_units'] = get_old_units(df, options)

    df = unit_conversion(df=df, options=options)

    if options['splice_cycles']:
        df = splice_cycles(df=df, kind='batsmall')
        df = splice_cycles(df=df, options=options)

    # Replace NaN with empty string in the Comment-column and then remove all steps where the program changes - this is due to inconsistent values for current
    df[["comment"]] = df[["comment"]].fillna(value={'comment': ''})
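Throughout this commit the manual required/default option loop is replaced by aux.update_options. A minimal sketch of what such a helper does, consistent with how it is called here (the real implementation lives in nafuma.auxillary and may differ in detail):

```py
def update_options(options: dict, required_options: list, default_options: dict) -> dict:
    # Fill in every required option the caller did not set,
    # leaving values that were passed in untouched.
    if options is None:
        options = {}
    for option in required_options:
        if option not in options:
            options[option] = default_options[option]
    return options
```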
@@ -173,27 +172,23 @@ def process_batsmall_data(df, options=None):
        cycles.append((chg_df, dchg_df))



    return cycles


def splice_cycles(df, kind):
def splice_cycles(df, options: dict) -> pd.DataFrame:
    ''' Splices two cycles together - if e.g. one charge cycle are split into several cycles due to change in parameters.

    if kind == 'batsmall':
    Incomplete, only accomodates BatSmall so far, and only for charge.'''

    if options['kind'] == 'batsmall':

        # Creates masks for charge and discharge curves
        chg_mask = df['current'] >= 0
        dchg_mask = df['current'] < 0

        # Get the number of cycles in the dataset
        max_count = df["count"].max()

        # Loop through all the cycling steps, change the current and capacities in the
        for i in range(df["count"].max()):
            sub_df = df.loc[df['count'] == i+1]
            sub_df_chg = sub_df.loc[chg_mask]
            #sub_df_dchg = sub_df.loc[dchg_mask]

            # get indices where the program changed
            chg_indices = sub_df_chg[sub_df_chg["comment"].str.contains("program")==True].index.to_list()

@@ -206,34 +201,17 @@ def splice_cycles(df, kind):
            if chg_indices:
                last_chg = chg_indices.pop()


            #dchg_indices = sub_df_dchg[sub_df_dchg["comment"].str.contains("program")==True].index.to_list()
            #if dchg_indices:
            #    del dchg_indices[0]


            if chg_indices:
                for i in chg_indices:
                    add = df['specific_capacity'].iloc[i-1]
                    df['specific_capacity'].iloc[i:last_chg] = df['specific_capacity'].iloc[i:last_chg] + add

            #if dchg_indices:
            #    for i in dchg_indices:
            #        add = df['specific_capacity'].iloc[i-1]
            #        df['specific_capacity'].iloc[i:last_dchg] = df['specific_capacity'].iloc[i:last_dchg] + add



    return df




def process_neware_data(df, options=None):
def process_neware_data(df, options={}):

    """ Takes data from NEWARE in a DataFrame as read by read_neware() and converts units, adds columns and splits into cycles.

@@ -245,25 +223,26 @@ def process_neware_data(df, options=None):
    molecular_weight: the molar mass (in g mol^-1) of the active material, to calculate the number of ions extracted. Assumes one electron per Li+/Na+-ion """

    required_options = ['units', 'active_material_weight', 'molecular_weight', 'reverse_discharge', 'splice_cycles']
    default_options = {'units': None, 'active_material_weight': None, 'molecular_weight': None, 'reverse_discharge': False, 'splice_cycles': None}

    if not options:
        options = default_options
    else:
        for option in required_options:
            if option not in options.keys():
                options[option] = default_options[option]
    default_options = {
        'units': None,
        'active_material_weight': None,
        'molecular_weight': None,
        'reverse_discharge': False,
        'splice_cycles': None}


    aux.update_options(options=options, required_options=required_options, default_options=default_options)
    options['kind'] = 'neware'


    # Complete set of new units and get the units used in the dataset, and convert values in the DataFrame from old to new.
    new_units = set_units(units=options['units'])
    old_units = get_old_units(df=df, kind='neware')
    set_units(options=options) # sets options['units']
    options['old_units'] = get_old_units(df=df, options=options)

    df = add_columns(df=df, active_material_weight=options['active_material_weight'], molecular_weight=options['molecular_weight'], old_units=old_units, kind='neware')
    df = add_columns(df=df, options=options) # adds columns to the DataFrame if active material weight and/or molecular weight has been passed in options

    df = unit_conversion(df=df, new_units=new_units, old_units=old_units, kind='neware')

    options['units'] = new_units
    df = unit_conversion(df=df, options=options) # converts all units from the old units to the desired units


    # Creates masks for charge and discharge curves

@@ -288,6 +267,8 @@ def process_neware_data(df, options=None):
        if chg_df.empty and dchg_df.empty:
            continue


        # Reverses the discharge curve if specified
        if options['reverse_discharge']:
            max_capacity = dchg_df['capacity'].max()
            dchg_df['capacity'] = np.abs(dchg_df['capacity'] - max_capacity)
@@ -310,35 +291,34 @@ def process_biologic_data(df, options=None):
def process_biologic_data(df, options=None):

    required_options = ['units', 'active_material_weight', 'molecular_weight', 'reverse_discharge', 'splice_cycles']
    default_options = {'units': None, 'active_material_weight': None, 'molecular_weight': None, 'reverse_discharge': False, 'splice_cycles': None}

    if not options:
        options = default_options
    else:
        for option in required_options:
            if option not in options.keys():
                options[option] = default_options[option]
    default_options = {
        'units': None,
        'active_material_weight': None,
        'molecular_weight': None,
        'reverse_discharge': False,
        'splice_cycles': None}


    aux.update_options(options=options, required_options=required_options, default_options=default_options)
    options['kind'] = 'biologic'

    # Pick out necessary columns
    df = df[['Ns changes', 'Ns', 'time/s', 'Ewe/V', 'Energy charge/W.h', 'Energy discharge/W.h', '<I>/mA', 'Capacity/mA.h', 'cycle number']].copy()

    # Complete set of new units and get the units used in the dataset, and convert values in the DataFrame from old to new.
    new_units = set_units(units=options['units'])
    old_units = get_old_units(df=df, kind='biologic')
    set_units(options)
    options['old_units'] = get_old_units(df=df, options=options)

    df = add_columns(df=df, active_material_weight=options['active_material_weight'], molecular_weight=options['molecular_weight'], old_units=old_units, kind='biologic')

    df = unit_conversion(df=df, new_units=new_units, old_units=old_units, kind='biologic')

    options['units'] = new_units
    df = add_columns(df=df, options=options)

    df = unit_conversion(df=df, options=options)

    # Creates masks for charge and discharge curves
    chg_mask = (df['status'] == 1) & (df['status_change'] != 1)
    dchg_mask = (df['status'] == 2) & (df['status_change'] != 1)


    # Initiate cycles list
    cycles = []
@@ -376,62 +356,62 @@
    return cycles


def add_columns(df, active_material_weight, molecular_weight, old_units, kind):
def add_columns(df, options):

    if kind == 'neware':
        if active_material_weight:
            df["SpecificCapacity({}/mg)".format(old_units["capacity"])] = df["Capacity({})".format(old_units['capacity'])] / (active_material_weight)
    if options['kind'] == 'neware':
        if options['active_material_weight']:
            df["SpecificCapacity({}/mg)".format(options['old_units']["capacity"])] = df["Capacity({})".format(options['old_units']['capacity'])] / (options['active_material_weight'])

        if molecular_weight:
        if options['molecular_weight']:
            faradays_constant = 96485.3365 # [F] = C mol^-1 = As mol^-1
            seconds_per_hour = 3600 # s h^-1
            f = faradays_constant / seconds_per_hour * 1000.0 # [f] = mAh mol^-1

            df["IonsExtracted"] = (df["SpecificCapacity({}/mg)".format(old_units['capacity'])]*molecular_weight)*1000/f
            df["IonsExtracted"] = (df["SpecificCapacity({}/mg)".format(options['old_units']['capacity'])]*options['molecular_weight'])*1000/f


    if kind == 'biologic':
        if active_material_weight:
    if options['kind'] == 'biologic':
        if options['active_material_weight']:

            capacity = old_units['capacity'].split('h')[0] + '.h'
            capacity = options['old_units']['capacity'].split('h')[0] + '.h'


            df["SpecificCapacity({}/mg)".format(old_units["capacity"])] = df["Capacity/{}".format(capacity)] / (active_material_weight)
            df["SpecificCapacity({}/mg)".format(options['old_units']["capacity"])] = df["Capacity/{}".format(capacity)] / (options['active_material_weight'])

        if molecular_weight:
        if options['molecular_weight']:
            faradays_constant = 96485.3365 # [F] = C mol^-1 = As mol^-1
            seconds_per_hour = 3600 # s h^-1
            f = faradays_constant / seconds_per_hour * 1000.0 # [f] = mAh mol^-1

            df["IonsExtracted"] = (df["SpecificCapacity({}/mg)".format(old_units['capacity'])]*molecular_weight)*1000/f
            df["IonsExtracted"] = (df["SpecificCapacity({}/mg)".format(options['old_units']['capacity'])]*options['molecular_weight'])*1000/f

    return df
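The IonsExtracted column computed above is the number of ions (one electron per ion) extracted per formula unit. With the specific capacity $q$ in mAh mg$^{-1}$ and the molar mass $M$ in g mol$^{-1}$, as used in the code:

$$
n_{\mathrm{ions}} = \frac{1000\,q\,M}{f}, \qquad
f = \frac{F}{3600\ \mathrm{s\,h^{-1}}}\times 1000\ \mathrm{mA\,A^{-1}} \approx 26\,801.5\ \mathrm{mAh\ mol^{-1}},
$$

where $F = 96485.3365$ C mol$^{-1}$ is Faraday's constant and the factor 1000 converts the specific capacity from mAh mg$^{-1}$ to mAh g$^{-1}$.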
def unit_conversion(df, new_units, old_units, kind):
def unit_conversion(df, options):
    from . import unit_tables

    if kind == 'batsmall':
    if options['kind'] == 'batsmall':

        df["TT [{}]".format(old_units["time"])] = df["TT [{}]".format(old_units["time"])] * unit_tables.time()[old_units["time"]].loc[new_units['time']]
        df["U [{}]".format(old_units["voltage"])] = df["U [{}]".format(old_units["voltage"])] * unit_tables.voltage()[old_units["voltage"]].loc[new_units['voltage']]
        df["I [{}]".format(old_units["current"])] = df["I [{}]".format(old_units["current"])] * unit_tables.current()[old_units["current"]].loc[new_units['current']]
        df["C [{}/{}]".format(old_units["capacity"], old_units["mass"])] = df["C [{}/{}]".format(old_units["capacity"], old_units["mass"])] * (unit_tables.capacity()[old_units["capacity"]].loc[new_units["capacity"]] / unit_tables.mass()[old_units["mass"]].loc[new_units["mass"]])
        df["TT [{}]".format(options['old_units']["time"])] = df["TT [{}]".format(options['old_units']["time"])] * unit_tables.time()[options['old_units']["time"]].loc[options['units']['time']]
        df["U [{}]".format(options['old_units']["voltage"])] = df["U [{}]".format(options['old_units']["voltage"])] * unit_tables.voltage()[options['old_units']["voltage"]].loc[options['units']['voltage']]
        df["I [{}]".format(options['old_units']["current"])] = df["I [{}]".format(options['old_units']["current"])] * unit_tables.current()[options['old_units']["current"]].loc[options['units']['current']]
        df["C [{}/{}]".format(options['old_units']["capacity"], options['old_units']["mass"])] = df["C [{}/{}]".format(options['old_units']["capacity"], options['old_units']["mass"])] * (unit_tables.capacity()[options['old_units']["capacity"]].loc[options['units']["capacity"]] / unit_tables.mass()[options['old_units']["mass"]].loc[options['units']["mass"]])

        df.columns = ['time', 'voltage', 'current', 'count', 'specific_capacity', 'comment']


    if kind == 'neware':
        df['Current({})'.format(old_units['current'])] = df['Current({})'.format(old_units['current'])] * unit_tables.current()[old_units['current']].loc[new_units['current']]
        df['Voltage({})'.format(old_units['voltage'])] = df['Voltage({})'.format(old_units['voltage'])] * unit_tables.voltage()[old_units['voltage']].loc[new_units['voltage']]
        df['Capacity({})'.format(old_units['capacity'])] = df['Capacity({})'.format(old_units['capacity'])] * unit_tables.capacity()[old_units['capacity']].loc[new_units['capacity']]
        df['Energy({})'.format(old_units['energy'])] = df['Energy({})'.format(old_units['energy'])] * unit_tables.energy()[old_units['energy']].loc[new_units['energy']]
        df['CycleTime({})'.format(new_units['time'])] = df.apply(lambda row : convert_time_string(row['Relative Time(h:min:s.ms)'], unit=new_units['time']), axis=1)
        df['RunTime({})'.format(new_units['time'])] = df.apply(lambda row : convert_datetime_string(row['Real Time(h:min:s.ms)'], reference=df['Real Time(h:min:s.ms)'].iloc[0], unit=new_units['time']), axis=1)
    if options['kind'] == 'neware':
        df['Current({})'.format(options['old_units']['current'])] = df['Current({})'.format(options['old_units']['current'])] * unit_tables.current()[options['old_units']['current']].loc[options['units']['current']]
        df['Voltage({})'.format(options['old_units']['voltage'])] = df['Voltage({})'.format(options['old_units']['voltage'])] * unit_tables.voltage()[options['old_units']['voltage']].loc[options['units']['voltage']]
        df['Capacity({})'.format(options['old_units']['capacity'])] = df['Capacity({})'.format(options['old_units']['capacity'])] * unit_tables.capacity()[options['old_units']['capacity']].loc[options['units']['capacity']]
        df['Energy({})'.format(options['old_units']['energy'])] = df['Energy({})'.format(options['old_units']['energy'])] * unit_tables.energy()[options['old_units']['energy']].loc[options['units']['energy']]
        df['CycleTime({})'.format(options['units']['time'])] = df.apply(lambda row : convert_time_string(row['Relative Time(h:min:s.ms)'], unit=options['units']['time']), axis=1)
        df['RunTime({})'.format(options['units']['time'])] = df.apply(lambda row : convert_datetime_string(row['Real Time(h:min:s.ms)'], reference=df['Real Time(h:min:s.ms)'].iloc[0], unit=options['units']['time']), axis=1)
        columns = ['status', 'jump', 'cycle', 'steps', 'current', 'voltage', 'capacity', 'energy']

        if 'SpecificCapacity({}/mg)'.format(old_units['capacity']) in df.columns:
            df['SpecificCapacity({}/mg)'.format(old_units['capacity'])] = df['SpecificCapacity({}/mg)'.format(old_units['capacity'])] * unit_tables.capacity()[old_units['capacity']].loc[new_units['capacity']] / unit_tables.mass()['mg'].loc[new_units["mass"]]
        if 'SpecificCapacity({}/mg)'.format(options['old_units']['capacity']) in df.columns:
            df['SpecificCapacity({}/mg)'.format(options['old_units']['capacity'])] = df['SpecificCapacity({}/mg)'.format(options['old_units']['capacity'])] * unit_tables.capacity()[options['old_units']['capacity']].loc[options['units']['capacity']] / unit_tables.mass()['mg'].loc[options['units']["mass"]]
            columns.append('specific_capacity')

        if 'IonsExtracted' in df.columns:
@@ -447,18 +427,18 @@ def unit_conversion(df, new_units, old_units, kind):

        df.columns = columns

    if kind == 'biologic':
        df['time/{}'.format(old_units['time'])] = df["time/{}".format(old_units["time"])] * unit_tables.time()[old_units["time"]].loc[new_units['time']]
        df["Ewe/{}".format(old_units["voltage"])] = df["Ewe/{}".format(old_units["voltage"])] * unit_tables.voltage()[old_units["voltage"]].loc[new_units['voltage']]
        df["<I>/{}".format(old_units["current"])] = df["<I>/{}".format(old_units["current"])] * unit_tables.current()[old_units["current"]].loc[new_units['current']]
    if options['kind'] == 'biologic':
        df['time/{}'.format(options['old_units']['time'])] = df["time/{}".format(options['old_units']["time"])] * unit_tables.time()[options['old_units']["time"]].loc[options['units']['time']]
        df["Ewe/{}".format(options['old_units']["voltage"])] = df["Ewe/{}".format(options['old_units']["voltage"])] * unit_tables.voltage()[options['old_units']["voltage"]].loc[options['units']['voltage']]
        df["<I>/{}".format(options['old_units']["current"])] = df["<I>/{}".format(options['old_units']["current"])] * unit_tables.current()[options['old_units']["current"]].loc[options['units']['current']]

        capacity = old_units['capacity'].split('h')[0] + '.h'
        df["Capacity/{}".format(capacity)] = df["Capacity/{}".format(capacity)] * (unit_tables.capacity()[old_units["capacity"]].loc[new_units["capacity"]])
        capacity = options['old_units']['capacity'].split('h')[0] + '.h'
        df["Capacity/{}".format(capacity)] = df["Capacity/{}".format(capacity)] * (unit_tables.capacity()[options['old_units']["capacity"]].loc[options['units']["capacity"]])

        columns = ['status_change', 'status', 'time', 'voltage', 'energy_charge', 'energy_discharge', 'current', 'capacity', 'cycle']

        if 'SpecificCapacity({}/mg)'.format(old_units['capacity']) in df.columns:
            df['SpecificCapacity({}/mg)'.format(old_units['capacity'])] = df['SpecificCapacity({}/mg)'.format(old_units['capacity'])] * unit_tables.capacity()[old_units['capacity']].loc[new_units['capacity']] / unit_tables.mass()['mg'].loc[new_units["mass"]]
        if 'SpecificCapacity({}/mg)'.format(options['old_units']['capacity']) in df.columns:
            df['SpecificCapacity({}/mg)'.format(options['old_units']['capacity'])] = df['SpecificCapacity({}/mg)'.format(options['old_units']['capacity'])] * unit_tables.capacity()[options['old_units']['capacity']].loc[options['units']['capacity']] / unit_tables.mass()['mg'].loc[options['units']["mass"]]
            columns.append('specific_capacity')

        if 'IonsExtracted' in df.columns:
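The conversions above rely on unit_tables, imported locally inside unit_conversion, where each quantity appears to expose a table such that table[old_unit].loc[new_unit] is the multiplicative conversion factor. A minimal sketch of one such table for time, assuming that layout and a hypothetical set of units (the real nafuma module may be organised differently):

```py
import pandas as pd

def time() -> pd.DataFrame:
    # Columns are the unit converted from, rows the unit converted to,
    # so time()['h'].loc['s'] == 3600 and time()['s'].loc['h'] == 1/3600.
    units = ['ms', 's', 'min', 'h']
    in_seconds = {'ms': 1e-3, 's': 1.0, 'min': 60.0, 'h': 3600.0}
    return pd.DataFrame(
        {old: [in_seconds[old] / in_seconds[new] for new in units] for old in units},
        index=units,
    )
```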
@@ -469,37 +449,42 @@ def unit_conversion(df, new_units, old_units, kind):
    return df


def set_units(units=None):
def set_units(options: dict) -> None:

    # Complete the list of units - if not all are passed, then default value will be used
    required_units = ['time', 'current', 'voltage', 'capacity', 'mass', 'energy', 'specific_capacity']
    default_units = {'time': 'h', 'current': 'mA', 'voltage': 'V', 'capacity': 'mAh', 'mass': 'g', 'energy': 'mWh', 'specific_capacity': None}

    if not units:
        units = default_units
    default_units = {
        'time': 'h',
        'current': 'mA',
        'voltage': 'V',
        'capacity': 'mAh',
        'mass': 'g',
        'energy': 'mWh',
        'specific_capacity': None}

    if units:
        for unit in required_units:
            if unit not in units.keys():
                units[unit] = default_units[unit]

    units['specific_capacity'] = r'{} {}'.format(units['capacity'], units['mass']) + '$^{-1}$'
    if not options['units']:
        options['units'] = default_units


    return units
    aux.update_options(options=options['units'], required_options=required_units, default_options=default_units)

    options['units']['specific_capacity'] = r'{} {}'.format(options['units']['capacity'], options['units']['mass']) + '$^{-1}$'



def get_old_units(df, kind):
def get_old_units(df: pd.DataFrame, options: dict) -> dict:
    ''' Reads a DataFrame with cycling data and determines which units have been used and returns these in a dictionary'''

    if options['kind'] == 'batsmall':

    if kind=='batsmall':
        time = df.columns[0].split()[-1].strip('[]')
        voltage = df.columns[1].split()[-1].strip('[]')
        current = df.columns[2].split()[-1].strip('[]')
        capacity, mass = df.columns[4].split()[-1].strip('[]').split('/')
        old_units = {'time': time, 'current': current, 'voltage': voltage, 'capacity': capacity, 'mass': mass}

    if kind=='neware':
    if options['kind']=='neware':

        for column in df.columns:
            if 'Voltage' in column:

@@ -514,7 +499,7 @@ def get_old_units(df, kind):
        old_units = {'voltage': voltage, 'current': current, 'capacity': capacity, 'energy': energy}


    if kind=='biologic':
    if options['kind'] == 'biologic':

        for column in df.columns:
            if 'time' in column:

@@ -530,8 +515,6 @@ def get_old_units(df, kind):

        old_units = {'voltage': voltage, 'current': current, 'capacity': capacity, 'energy': energy, 'time': time}



    return old_units

def convert_time_string(time_string, unit='ms'):
@@ -5,59 +5,120 @@ import pandas as pd
import numpy as np
import math

import ipywidgets as widgets
from IPython.display import display

import nafuma.electrochemistry as ec
import nafuma.plotting as btp
import nafuma.auxillary as aux


def plot_gc(path, kind, options=None):

    # Prepare plot, and read and process data
    fig, ax = prepare_gc_plot(options=options)
    cycles = ec.io.read_data(path=path, kind=kind, options=options)

def plot_gc(data, options=None):


    # Update options
    required_options = ['x_vals', 'y_vals', 'which_cycles', 'chg', 'dchg', 'colours', 'differentiate_charge_discharge', 'gradient']
    default_options = {'x_vals': 'capacity', 'y_vals': 'voltage', 'which_cycles': 'all', 'chg': True, 'dchg': True, 'colours': None, 'differentiate_charge_discharge': True, 'gradient': False}
    required_options = ['x_vals', 'y_vals', 'which_cycles', 'charge', 'discharge', 'colours', 'differentiate_charge_discharge', 'gradient', 'interactive', 'interactive_session_active', 'rc_params', 'format_params']
    default_options = {
        'x_vals': 'capacity', 'y_vals': 'voltage',
        'which_cycles': 'all',
        'charge': True, 'discharge': True,
        'colours': None,
        'differentiate_charge_discharge': True,
        'gradient': False,
        'interactive': False,
        'interactive_session_active': False,
        'rc_params': {},
        'format_params': {}}

    options = update_options(options=options, required_options=required_options, default_options=default_options)
    options = aux.update_options(options=options, required_options=required_options, default_options=default_options)


    if not 'cycles' in data.keys():
        data['cycles'] = ec.io.read_data(data=data, options=options)

    # Update list of cycles to correct indices
    update_cycles_list(cycles=cycles, options=options)
    update_cycles_list(cycles=data['cycles'], options=options)

    colours = generate_colours(cycles=cycles, options=options)
    colours = generate_colours(cycles=data['cycles'], options=options)

    if options['interactive']:
        options['interactive'], options['interactive_session_active'] = False, True
        plot_gc_interactive(data=data, options=options)
        return


    for i, cycle in enumerate(cycles):
    # Prepare plot, and read and process data

    fig, ax = btp.prepare_plot(options=options)

    for i, cycle in enumerate(data['cycles']):
        if i in options['which_cycles']:
            if options['chg']:
            if options['charge']:
                cycle[0].plot(x=options['x_vals'], y=options['y_vals'], ax=ax, c=colours[i][0])

            if options['dchg']:
            if options['discharge']:
                cycle[1].plot(x=options['x_vals'], y=options['y_vals'], ax=ax, c=colours[i][1])


    fig, ax = prettify_gc_plot(fig=fig, ax=ax, options=options)

    return cycles, fig, ax


def update_options(options, required_options, default_options):

    if not options:
        options = default_options

    if options['interactive_session_active']:
        update_labels(options, force=True)
    else:
        for option in required_options:
            if option not in options.keys():
                options[option] = default_options[option]
        update_labels(options)

    return options
    fig, ax = btp.adjust_plot(fig=fig, ax=ax, options=options)

def update_cycles_list(cycles, options):
    #if options['interactive_session_active']:

    if not options:
        options['which_cycles']

    return data['cycles'], fig, ax


def plot_gc_interactive(data, options):

    w = widgets.interactive(btp.ipywidgets_update, func=widgets.fixed(plot_gc), data=widgets.fixed(data), options=widgets.fixed(options),
        charge=widgets.ToggleButton(value=True),
        discharge=widgets.ToggleButton(value=True),
        x_vals=widgets.Dropdown(options=['specific_capacity', 'capacity', 'ions', 'voltage', 'time', 'energy'], value='specific_capacity', description='X-values')
    )

    options['widget'] = w

    display(w)


def update_labels(options, force=False):

    if 'xlabel' not in options.keys() or force:
        options['xlabel'] = options['x_vals'].capitalize().replace('_', ' ')

    if 'ylabel' not in options.keys() or force:
        options['ylabel'] = options['y_vals'].capitalize().replace('_', ' ')


    if 'xunit' not in options.keys() or force:
        if options['x_vals'] == 'capacity':
            options['xunit'] = options['units']['capacity']
        elif options['x_vals'] == 'specific_capacity':
            options['xunit'] = f"{options['units']['capacity']} {options['units']['mass']}$^{{-1}}$"
        elif options['x_vals'] == 'time':
            options['xunit'] = options['units']['time']
        elif options['x_vals'] == 'ions':
            options['xunit'] = None


    if 'yunit' not in options.keys() or force:
        if options['y_vals'] == 'voltage':
            options['yunit'] = options['units']['voltage']



def update_cycles_list(cycles, options: dict) -> None:

    if options['which_cycles'] == 'all':
        options['which_cycles'] = [i for i in range(len(cycles))]
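With the new data/options interface above, a galvanostatic-cycling plot can be produced roughly as follows. This is a sketch: the module path ec.plot.plot_gc (mirroring how xrd.plot is used elsewhere in this commit), the file name and the weight are assumptions on my part.

```py
import nafuma.electrochemistry as ec

data = {'path': 'path/to/cell_01.xlsx', 'kind': 'neware'}   # hypothetical Neware export

options = {
    'x_vals': 'specific_capacity',
    'y_vals': 'voltage',
    'which_cycles': 'all',
    'charge': True,
    'discharge': True,
    'active_material_weight': 5.0,   # mg, needed to create the specific_capacity column
    'interactive': False,            # set True inside Jupyter for the ipywidgets controls
}

cycles, fig, ax = ec.plot.plot_gc(data=data, options=options)
```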
@@ -81,52 +142,6 @@ def update_cycles_list(cycles, options):
        options['which_cycles'] = [i-1 for i in range(which_cycles[0], which_cycles[1]+1)]


    return options


def prepare_gc_plot(options=None):


    # First take care of the options for plotting - set any values not specified to the default values
    required_options = ['columns', 'width', 'height', 'format', 'dpi', 'facecolor']
    default_options = {'columns': 1, 'width': 14, 'format': 'golden_ratio', 'dpi': None, 'facecolor': 'w'}

    # If none are set at all, just pass the default_options
    if not options:
        options = default_options
        options['height'] = options['width'] * (math.sqrt(5) - 1) / 2
        options['figsize'] = (options['width'], options['height'])


    # If options is passed, go through to fill out the rest.
    else:
        # Start by setting the width:
        if 'width' not in options.keys():
            options['width'] = default_options['width']

        # Then set height - check options for format. If not given, set the height to the width scaled by the golden ratio - if the format is square, set the same. This should possibly allow for the tweaking of custom ratios later.
        if 'height' not in options.keys():
            if 'format' not in options.keys():
                options['height'] = options['width'] * (math.sqrt(5) - 1) / 2
            elif options['format'] == 'square':
                options['height'] = options['width']

        options['figsize'] = (options['width'], options['height'])

        # After height and width are set, go through the rest of the options to make sure that all the required options are filled
        for option in required_options:
            if option not in options.keys():
                options[option] = default_options[option]

    fig, ax = plt.subplots(figsize=(options['figsize']), dpi=options['dpi'], facecolor=options['facecolor'])

    linewidth = 1*options['columns']
    axeswidth = 3*options['columns']

    plt.rc('lines', linewidth=linewidth)
    plt.rc('axes', linewidth=axeswidth)

    return fig, ax


def prettify_gc_plot(fig, ax, options=None):

@@ -161,12 +176,12 @@ def prettify_gc_plot(fig, ax, options=None):
        'positions': {'xaxis': 'bottom', 'yaxis': 'left'},
        'x_vals': 'specific_capacity', 'y_vals': 'voltage',
        'xlabel': None, 'ylabel': None,
        'units': None,
        'units': {'capacity': 'mAh', 'specific_capacity': r'mAh g$^{-1}$', 'time': 's', 'current': 'mA', 'energy': 'mWh', 'mass': 'g', 'voltage': 'V'},
        'sizes': None,
        'title': None
    }

    update_options(options, required_options, default_options)
    aux.update_options(options, required_options, default_options)


    ##################################################################
@@ -135,7 +135,10 @@ def adjust_plot(fig, ax, options):
        ax.set_ylabel('')

    if not options['hide_x_labels']:
        if not options['xunit']:
            ax.set_xlabel(f'{options["xlabel"]}')
        else:
            ax.set_xlabel(f'{options["xlabel"]} [{options["xunit"]}]')
    else:
        ax.set_xlabel('')
@@ -5,6 +5,7 @@ import numpy as np
import os
import matplotlib.pyplot as plt
import nafuma.auxillary as aux

import nafuma.plotting as btp
import nafuma.xanes as xas
import nafuma.xanes.io as io

@@ -14,6 +15,7 @@ import ipywidgets as widgets
from IPython.display import display


##Better to make a new function that loops through the files, and performing the split_xanes_scan on

#Trying to make a function that can decide which edge it is based on the first ZapEnergy-value

@@ -249,7 +251,6 @@ def pre_edge_subtraction(data: dict, options={}):



def post_edge_fit(data: dict, options={}):
    ''' Fit the post edge within the post_edge.limits to a polynomial of post_edge.polyorder order. Allows interactive plotting, as well as showing static plots and saving plots to drive.

@@ -258,6 +259,7 @@ def post_edge_fit(data: dict, options={}):

    required_options = ['log', 'logfile', 'post_edge_masks', 'post_edge_limits', 'post_edge_polyorder', 'post_edge_store_data', 'interactive', 'interactive_session_active', 'show_plots', 'save_plots', 'save_folder']

    default_options = {
        'log': False,
        'logfile': f'{datetime.now().strftime("%Y-%m-%d-%H-%M-%S")}_post_edge_fit.log',

@@ -339,6 +341,7 @@ def post_edge_fit(data: dict, options={}):
        #adding a new column in df_background with the y-values of the background
        post_edge_fit_data.insert(1,filename,background)


        if options['save_plots'] or options['show_plots']:
@@ -6,12 +6,10 @@ import nafuma.auxillary as aux
from nafuma.xanes.calib import find_element
import datetime


def split_scan_data(data: dict, options={}) -> list:
    ''' Splits a XANES-file from BM31 into different files depending on the edge. Has the option to add intensities of all scans of same edge into the same file.
    As of now only picks out xmap_rois (fluoresence mode) and for Mn, Fe, Co and Ni K-edges.'''


    required_options = ['log', 'logfile', 'save', 'save_folder', 'replace', 'active_roi', 'add_rois', 'return']

    default_options = {
@@ -40,12 +40,13 @@ def integrate_1d(data, options={}, index=0):
        df: DataFrame contianing 1D diffractogram if option 'return' is True
    '''

    required_options = ['unit', 'nbins', 'save', 'save_filename', 'save_extension', 'save_folder', 'overwrite', 'extract_folder']
    required_options = ['unit', 'npt', 'save', 'save_filename', 'save_extension', 'save_folder', 'overwrite', 'extract_folder', 'error_model']

    default_options = {
        'unit': '2th_deg',
        'nbins': 3000,
        'npt': 3000,
        'extract_folder': 'tmp',
        'error_model': None,
        'save': False,
        'save_filename': None,
        'save_extension': '_integrated.xy',

@@ -59,9 +60,17 @@ def integrate_1d(data, options={}, index=0):

    # Get image array from filename if not passed
    if 'image' not in data.keys():
    if 'image' not in data.keys() or not data['image']:
        data['image'] = get_image_array(data['path'][index])


    # Load mask
    if 'mask' in data.keys():
        mask = get_image_array(data['mask'])
    else:
        mask = None


    # Instanciate the azimuthal integrator from pyFAI from the calibrant (.poni-file)
    ai = pyFAI.load(data['calibrant'])

@@ -72,11 +81,17 @@ def integrate_1d(data, options={}, index=0):
    if not os.path.isdir(options['extract_folder']):
        os.makedirs(options['extract_folder'])

    if not os.path.isdir(options['save_folder']):
        os.makedirs(options['save_folder'])

    res = ai.integrate1d(data['image'], options['nbins'], unit=options['unit'], filename=filename)


    res = ai.integrate1d(data['image'], npt=options['npt'], mask=mask, error_model=options['error_model'], unit=options['unit'], filename=filename)

    data['path'][index] = filename
    diffractogram, wavelength = read_xy(data=data, options=options, index=index)
    diffractogram, _ = read_xy(data=data, options=options, index=index)
    wavelength = find_wavelength_from_poni(path=data['calibrant'])

    if not options['save']:
        os.remove(filename)
@@ -278,8 +293,12 @@ def read_brml(data, options={}, index=0):

    #if 'wavelength' not in data.keys():
    # Find wavelength

    if not data['wavelength'][index]:
        for chain in root.findall('./FixedInformation/Instrument/PrimaryTracks/TrackInfoData/MountedOptics/InfoData/Tube/WaveLengthAlpha1'):
            wavelength = float(chain.attrib['Value'])
    else:
        wavelength = data['wavelength'][index]


    diffractogram = pd.DataFrame(diffractogram)

@@ -302,7 +321,11 @@ def read_xy(data, options={}, index=0):

    #if 'wavelength' not in data.keys():
    # Get wavelength from scan

    if not 'wavelength' in data.keys() or data['wavelength'][index]:
        wavelength = find_wavelength_from_xy(path=data['path'][index])
    else:
        wavelength = data['wavelength'][index]

    with open(data['path'][index], 'r') as f:
        position = 0

@@ -326,6 +349,38 @@ def read_xy(data, options={}, index=0):
    return diffractogram, wavelength



def strip_headers_from_xy(path: str, filename=None) -> None:
    ''' Strips headers from a .xy-file'''


    xy = []
    with open(path, 'r') as f:
        lines = f.readlines()

    headerlines = 0
    for line in lines:
        if line[0] == '#':
            headerlines += 1
        else:
            xy.append(line)


    if not filename:
        ext = path.split('.')[-1]
        filename = path.split(f'.{ext}')[0] + f'_noheaders.{ext}'


    with open(filename, 'w') as f:
        for line in xy:
            f.write(line)




def read_data(data, options={}, index=0):

    beamline_extensions = ['mar3450', 'edf', 'cbf']
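The new strip_headers_from_xy helper above is self-contained; a quick usage sketch (the path is a placeholder):

```py
import nafuma.xrd as xrd

# Writes 'path/to/scan_noheaders.xy' next to the original file,
# keeping only the data lines (everything not starting with '#').
xrd.io.strip_headers_from_xy('path/to/scan.xy')
```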
@@ -351,6 +406,7 @@
    diffractogram = apply_offset(diffractogram, wavelength, index, options)


    diffractogram = translate_wavelengths(data=diffractogram, wavelength=wavelength)

    return diffractogram, wavelength

@@ -470,7 +526,7 @@ def translate_wavelengths(data: pd.DataFrame, wavelength: float, to_wavelength=N

    data['2th_cuka'] = np.NAN

    data['2th_cuka'].loc[data['2th'] <= max_2th_cuka] = 2*np.arcsin(cuka/wavelength * np.sin((data['2th']/2) * np.pi/180)) * 180/np.pi
    data['2th_cuka'].loc[data['2th'] <= max_2th_cuka] = 2*np.arcsin(cuka/wavelength * np.sin((data['2th'].loc[data['2th'] <= max_2th_cuka]/2) * np.pi/180)) * 180/np.pi

    # Translate to MoKalpha
    moka = 0.71073 # Å

@@ -482,7 +538,7 @@ def translate_wavelengths(data: pd.DataFrame, wavelength: float, to_wavelength=N

    data['2th_moka'] = np.NAN

    data['2th_moka'].loc[data['2th'] <= max_2th_moka] = 2*np.arcsin(moka/wavelength * np.sin((data['2th']/2) * np.pi/180)) * 180/np.pi
    data['2th_moka'].loc[data['2th'] <= max_2th_moka] = 2*np.arcsin(moka/wavelength * np.sin((data['2th'].loc[data['2th'] <= max_2th_moka]/2) * np.pi/180)) * 180/np.pi


    # Convert to other parameters

@@ -501,7 +557,7 @@ def translate_wavelengths(data: pd.DataFrame, wavelength: float, to_wavelength=N


    data['2th'] = np.NAN
    data['2th'].loc[data['2th_cuka'] <= max_2th] = 2*np.arcsin(to_wavelength/cuka * np.sin((data['2th_cuka']/2) * np.pi/180)) * 180/np.pi
    data['2th'].loc[data['2th_cuka'] <= max_2th] = 2*np.arcsin(to_wavelength/cuka * np.sin((data['2th_cuka'].loc[data['2th_cuka'] <= max_2th]/2) * np.pi/180)) * 180/np.pi
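The three fixes above all concern the same Bragg-law translation between wavelengths. For a reflection measured at scattering angle 2θ with wavelength λ, the equivalent angle at another wavelength λ' (for example CuKα) is

$$
2\theta' = 2\arcsin\!\left(\frac{\lambda'}{\lambda}\,\sin\theta\right),
$$

which is only defined where (λ'/λ) sin θ ≤ 1; that condition is what max_2th_cuka, max_2th_moka and max_2th encode. Restricting the right-hand side to the same mask as the left-hand side keeps the arcsin argument inside its domain instead of evaluating it (and producing NaNs) for the out-of-range rows.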
@@ -528,6 +584,22 @@ def find_wavelength_from_xy(path):
            elif 'Wavelength' in line:
                wavelength = float(line.split()[2])*10**10

    else:
        wavelength = None


    return wavelength


def find_wavelength_from_poni(path):

    with open(path, 'r') as f:
        lines = f.readlines()

    for line in lines:
        if 'Wavelength' in line:
            wavelength = float(line.split()[-1])*10**10


    return wavelength
@@ -13,7 +13,6 @@ import nafuma.xrd as xrd
import nafuma.auxillary as aux
import nafuma.plotting as btp


def plot_diffractogram(data, options={}):
    ''' Plots a diffractogram.

@@ -67,7 +66,14 @@ def plot_diffractogram(data, options={}):
    if not 'diffractogram' in data.keys():
        # Initialise empty list for diffractograms and wavelengths
        data['diffractogram'] = [None for _ in range(len(data['path']))]

        # If wavelength is not manually passed it should be automatically gathered from the .xy-file
        if 'wavelength' not in data.keys():
            data['wavelength'] = [None for _ in range(len(data['path']))]
        else:
            # If only a single value is passed it should be set to be the same for all diffractograms passed
            if not isinstance(data['wavelength'], list):
                data['wavelength'] = [data['wavelength'] for _ in range(len(data['path']))]

        for index in range(len(data['path'])):
            diffractogram, wavelength = xrd.io.read_data(data=data, options=options, index=index)

@@ -75,6 +81,9 @@ def plot_diffractogram(data, options={}):
            data['diffractogram'][index] = diffractogram
            data['wavelength'][index] = wavelength

            # FIXME This is a quick fix as the image is not reloaded when passing multiple beamline datasets
            data['image'] = None

    # Sets the xlim if this has not bee specified
    if not options['xlim']:
        options['xlim'] = [data['diffractogram'][0][options['x_vals']].min(), data['diffractogram'][0][options['x_vals']].max()]

@@ -114,7 +123,7 @@ def plot_diffractogram(data, options={}):
        options['reflections_data'] = [options['reflections_data']]

    # Determine number of subplots and height ratios between them
    if len(options['reflections_data']) >= 1:
    if options['reflections_data'] and len(options['reflections_data']) >= 1:
        options = determine_grid_layout(options=options)


@@ -331,10 +340,10 @@ def plot_diffractogram_interactive(data, options):
            'heatmap_default': {'min': xminmax['heatmap'][0], 'max': xminmax['heatmap'][1], 'value': [xminmax['heatmap'][0], xminmax['heatmap'][1]], 'step': 10}
        },
        'ylim': {
            'w': widgets.FloatRangeSlider(value=[yminmax['start'][2], yminmax['start'][3]], min=yminmax['start'][0], max=yminmax['start'][1], step=0.5, layout=widgets.Layout(width='95%')),
            'w': widgets.FloatRangeSlider(value=[yminmax['start'][2], yminmax['start'][3]], min=yminmax['start'][0], max=yminmax['start'][1], step=0.01, layout=widgets.Layout(width='95%')),
            'state': 'heatmap' if options['heatmap'] else 'diff',
            'diff_default': {'min': yminmax['diff'][0], 'max': yminmax['diff'][1], 'value': [yminmax['diff'][2], yminmax['diff'][3]], 'step': 0.1},
            'heatmap_default': {'min': yminmax['heatmap'][0], 'max': yminmax['heatmap'][1], 'value': [yminmax['heatmap'][0], yminmax['heatmap'][1]], 'step': 0.1}
            'diff_default': {'min': yminmax['diff'][0], 'max': yminmax['diff'][1], 'value': [yminmax['diff'][2], yminmax['diff'][3]], 'step': 0.01},
            'heatmap_default': {'min': yminmax['heatmap'][0], 'max': yminmax['heatmap'][1], 'value': [yminmax['heatmap'][0], yminmax['heatmap'][1]], 'step': 0.01}
        }
    }

@@ -356,7 +365,12 @@ def plot_diffractogram_interactive(data, options):
    w = widgets.interactive(btp.ipywidgets_update, func=widgets.fixed(plot_diffractogram), data=widgets.fixed(data), options=widgets.fixed(options),
        scatter=widgets.ToggleButton(value=False),
        line=widgets.ToggleButton(value=True),
        xlim=options['widgets']['xlim']['w'])
        heatmap=widgets.ToggleButton(value=options['heatmap']),
        x_vals=widgets.Dropdown(options=['2th', 'd', '1/d', 'q', 'q2', 'q4', '2th_cuka', '2th_moka'], value='2th', description='X-values'),
        xlim=options['widgets']['xlim']['w'],
        ylim=options['widgets']['ylim']['w'],
        offset_y=widgets.BoundedFloatText(value=options['offset_y'], min=-5, max=5, step=0.01, description='offset_y'),
        offset_x=widgets.BoundedFloatText(value=options['offset_x'], min=-1, max=1, step=0.01, description='offset_x'))

    options['widget'] = w
test.txt  (deleted, 1 line)
@@ -1 +0,0 @@
hei på dej