pymagicc.io¶
- class pymagicc.io.MAGICCData(data, columns=None, **kwargs)[source]¶
Bases: scmdata.run.ScmRun
An interface to read and write the input files used by MAGICC.
MAGICCData can read input files from both MAGICC6 and MAGICC7. It returns the data in a common format with a common vocabulary, to simplify the process of reading, writing and handling MAGICC data. For more information on file conventions, see MAGICC file conventions.
See notebooks/Input-Examples.ipynb for usage examples.
- data¶
A pandas dataframe with the data.
- Type
pd.DataFrame
- metadata¶
Metadata for the data in self.df.
- Type
dict
- filepath¶
The file the data was loaded from. None if data was not loaded from a file.
- Type
str
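A minimal usage sketch; the file path is a placeholder, and it assumes (as in pymagicc's usage examples) that MAGICCData accepts a path to a MAGICC input file as its data argument:
>>> import os
>>> from pymagicc.io import MAGICCData
>>> mdata = MAGICCData(os.path.join("path", "to", "HISTRCP_CO2I_EMIS.IN"))  # placeholder path
>>> mdata.metadata      # header metadata parsed from the file
>>> mdata.timeseries()  # the data as a pandas.DataFrame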
- add(other, op_cols, **kwargs)¶
Add values
- Parameters
other (ScmRun) – ScmRun containing data to add
op_cols (dict of str: str) – Dictionary whose keys are the columns to drop before adding and whose values are the values those columns should hold in the output. For example, if we have op_cols={"variable": "Emissions|CO2 - Emissions|CO2|Fossil"} then the addition will be performed with an index that uses all columns except the "variable" column and the output will have a "variable" column with the value "Emissions|CO2 - Emissions|CO2|Fossil".
**kwargs (any) – Passed to prep_for_op()
- Returns
Sum of self and other, using op_cols to define the columns which should be dropped before the data is aligned and to define the value of these columns in the output.
- Return type
ScmRun
Examples
>>> import numpy as np >>> from scmdata import ScmRun >>> >>> IDX = [2010, 2020, 2030] >>> >>> >>> start = ScmRun( ... data=np.arange(18).reshape(3, 6), ... index=IDX, ... columns={ ... "variable": [ ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Cumulative Emissions|CO2", ... "Surface Air Temperature Change", ... ], ... "unit": ["GtC / yr", "GtC / yr", "GtC / yr", "GtC / yr", "GtC", "K"], ... "region": ["World|NH", "World|NH", "World|SH", "World|SH", "World", "World"], ... "model": "idealised", ... "scenario": "idealised", ... }, ... ) >>> >>> start.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 Emissions|CO2|Fossil GtC / yr World|SH idealised idealised 2.0 8.0 14.0 Emissions|CO2|AFOLU GtC / yr World|SH idealised idealised 3.0 9.0 15.0 Cumulative Emissions|CO2 GtC World idealised idealised 4.0 10.0 16.0 >>> fos = start.filter(variable="*Fossil") >>> fos.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 World|SH idealised idealised 2.0 8.0 14.0 >>> >>> afolu = start.filter(variable="*AFOLU") >>> afolu.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 World|SH idealised idealised 3.0 9.0 15.0 >>> >>> total = fos.add(afolu, op_cols={"variable": "Emissions|CO2"}) >>> total.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 model scenario region variable unit idealised idealised World|NH Emissions|CO2 gigatC / a 1.0 13.0 25.0 World|SH Emissions|CO2 gigatC / a 5.0 17.0 29.0 >>> >>> nh = start.filter(region="*NH") >>> nh.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 >>> >>> sh = start.filter(region="*SH") >>> sh.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|SH idealised idealised 2.0 8.0 14.0 Emissions|CO2|AFOLU GtC / yr World|SH idealised idealised 3.0 9.0 15.0 >>> >>> world = nh.add(sh, op_cols={"region": "World"}) >>> world.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 model scenario region variable unit idealised idealised World Emissions|CO2|Fossil gigatC / a 2.0 14.0 26.0 Emissions|CO2|AFOLU gigatC / a 4.0 16.0 28.0
- append(other, inplace=False, duplicate_msg=True, metadata=None, **kwargs)¶
Append additional data to the current dataframe.
For details, see run_append().
- Parameters
other – Data (in a format which can be cast to ScmRun) to append
inplace (bool) – If True, append data in place and return None. Otherwise, return a new ScmRun instance with the appended data.
duplicate_msg (Union[str, bool]) – If True, raise a scmdata.errors.NonUniqueMetadataError error so the user can see the duplicate timeseries. If False, take the average and do not raise a warning or error. If "warn", raise a warning if duplicate data is detected.
metadata (Optional[Dict[str, Union[str, int, float]]]) – If not None, override the metadata of the resulting ScmRun with metadata. Otherwise, the metadata for the runs are merged. In the case where there are duplicate metadata keys, the values from the first run are used.
**kwargs – Keywords to pass to ScmRun.__init__() when reading other
- Returns
If not inplace, return a new ScmRun instance containing the result of the append.
- Return type
ScmRun
- Raises
NonUniqueMetadataError – If the appending results in timeseries with duplicate metadata and duplicate_msg is True
- convert_unit(unit, context=None, inplace=False, **kwargs)¶
Convert the units of a selection of timeseries.
Uses scmdata.units.UnitConverter to perform the conversion.
- Parameters
unit (str) – Unit to convert to. This must be recognised by UnitConverter.
context (Optional[str]) – Context to use for the conversion, i.e. which metric to apply when performing CO2-equivalent calculations. If None, no metric will be applied and CO2-equivalent calculations will raise DimensionalityError.
inplace (bool) – If True, apply the conversion inplace and return None
**kwargs – Extra arguments which are passed to filter() to limit the timeseries which are attempted to be converted. Defaults to selecting the entire ScmRun, which will likely fail.
- Returns
If inplace is False, a new ScmRun instance with the converted units.
- Return type
ScmRun
Notes
If context is not None, then the context used for the conversion will be checked against any existing metadata and, if the conversion is valid, stored in the output's metadata.
- Raises
ValueError – "unit_context" is already included in self's meta_attributes() and it does not match context for the variables to be converted.
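An illustrative sketch (run is assumed to be an existing ScmRun or MAGICCData instance; the unit strings and the "AR4GWP100" context must be recognised by scmdata's unit handling):
>>> co2_mt = run.filter(variable="Emissions|CO2*").convert_unit("Mt CO2 / yr")
>>> ch4_co2e = run.filter(variable="Emissions|CH4").convert_unit(
...     "Mt CO2 / yr", context="AR4GWP100"
... )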
- copy()¶
Return a copy.deepcopy() of self.
Also copies the underlying timeseries data.
- Returns
copy.deepcopy() of self
- Return type
ScmRun
- data_hierarchy_separator = '|'¶
String used to define different levels in our data hierarchies.
By default we follow pyam and use “|”. In such a case, emissions of CO2 for energy from coal would be “Emissions|CO2|Energy|Coal”.
- Type
str
- delta_per_delta_time(out_var=None)¶
Calculate change in timeseries values for each timestep, divided by the size of the timestep
The output is placed on the middle of each timestep and is one timestep shorter than the input.
- Parameters
out_var (str) – If provided, the variable column of the output is set equal to out_var. Otherwise, the output variables are equal to the input variables, prefixed with "Delta ".
- Returns
scmdata.ScmRun containing the changes in values of self, normalised by the change in time
- Return type
scmdata.ScmRun
- Warns
UserWarning – The data contains nans. If this happens, the output data will also contain nans.
- divide(other, op_cols, **kwargs)¶
Divide values (self / other)
- Parameters
other (ScmRun) – ScmRun containing data to divide by
op_cols (dict of str: str) – Dictionary whose keys are the columns to drop before dividing and whose values are the values those columns should hold in the output. For example, if we have op_cols={"variable": "Emissions|CO2 - Emissions|CO2|Fossil"} then the division will be performed with an index that uses all columns except the "variable" column and the output will have a "variable" column with the value "Emissions|CO2 - Emissions|CO2|Fossil".
**kwargs (any) – Passed to prep_for_op()
- Returns
Quotient of self and other, using op_cols to define the columns which should be dropped before the data is aligned and to define the value of these columns in the output.
- Return type
ScmRun
Examples
>>> import numpy as np >>> from scmdata import ScmRun >>> >>> IDX = [2010, 2020, 2030] >>> >>> >>> start = ScmRun( ... data=np.arange(18).reshape(3, 6), ... index=IDX, ... columns={ ... "variable": [ ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Cumulative Emissions|CO2", ... "Surface Air Temperature Change", ... ], ... "unit": ["GtC / yr", "GtC / yr", "GtC / yr", "GtC / yr", "GtC", "K"], ... "region": ["World|NH", "World|NH", "World|SH", "World|SH", "World", "World"], ... "model": "idealised", ... "scenario": "idealised", ... }, ... ) >>> >>> start.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 Emissions|CO2|Fossil GtC / yr World|SH idealised idealised 2.0 8.0 14.0 Emissions|CO2|AFOLU GtC / yr World|SH idealised idealised 3.0 9.0 15.0 Cumulative Emissions|CO2 GtC World idealised idealised 4.0 10.0 16.0 >>> fos = start.filter(variable="*Fossil") >>> fos.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 World|SH idealised idealised 2.0 8.0 14.0 >>> >>> afolu = start.filter(variable="*AFOLU") >>> afolu.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 World|SH idealised idealised 3.0 9.0 15.0 >>> >>> fos_afolu_ratio = fos.divide( ... afolu, op_cols={"variable": "Emissions|CO2|Fossil : AFOLU"} ... ) >>> fos_afolu_ratio.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 model scenario region variable unit idealised idealised World|NH Emissions|CO2|Fossil : AFOLU dimensionless 0.000000 0.857143 0.923077 World|SH Emissions|CO2|Fossil : AFOLU dimensionless 0.666667 0.888889 0.933333
- drop_meta(columns, inplace=False)¶
Drop meta columns out of the Run
- Parameters
columns (Union[list, str]) – The column or columns to drop
inplace (Optional[bool]) – If True, do operation inplace and return None.
- Raises
KeyError – If any of the columns do not exist in the meta DataFrame
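A short sketch (run and the metadata column names are illustrative assumptions):
>>> slimmed = run.drop_meta("climate_model")
>>> run.drop_meta(["climate_model", "ensemble_member"], inplace=True)  # returns None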
- property empty: bool¶
Indicate whether the ScmRun is empty, i.e. contains no data
- Returns
If the ScmRun is empty, return True, otherwise return False
- Return type
bool
- filter(keep=True, inplace=False, log_if_empty=True, **kwargs)¶
Return a filtered ScmRun (i.e., a subset of the data).
>>> df <scmdata.ScmRun (timeseries: 3, timepoints: 3)> Time: Start: 2005-01-01T00:00:00 End: 2015-01-01T00:00:00 Meta: model scenario region variable unit climate_model 0 a_iam a_scenario World Primary Energy EJ/yr a_model 1 a_iam a_scenario World Primary Energy|Coal EJ/yr a_model 2 a_iam a_scenario2 World Primary Energy EJ/yr a_model [3 rows x 7 columns] >>> df.filter(scenario="a_scenario") <scmdata.ScmRun (timeseries: 2, timepoints: 3)> Time: Start: 2005-01-01T00:00:00 End: 2015-01-01T00:00:00 Meta: model scenario region variable unit climate_model 0 a_iam a_scenario World Primary Energy EJ/yr a_model 1 a_iam a_scenario World Primary Energy|Coal EJ/yr a_model [2 rows x 7 columns] >>> df.filter(scenario="a_scenario", keep=False) <scmdata.ScmRun (timeseries: 1, timepoints: 3)> Time: Start: 2005-01-01T00:00:00 End: 2015-01-01T00:00:00 Meta: model scenario region variable unit climate_model 2 a_iam a_scenario2 World Primary Energy EJ/yr a_model [1 rows x 7 columns] >>> df.filter(level=1) <scmdata.ScmRun (timeseries: 2, timepoints: 3)> Time: Start: 2005-01-01T00:00:00 End: 2015-01-01T00:00:00 Meta: model scenario region variable unit climate_model 0 a_iam a_scenario World Primary Energy EJ/yr a_model 2 a_iam a_scenario2 World Primary Energy EJ/yr a_model [2 rows x 7 columns] >>> df.filter(year=range(2000, 2011)) <scmdata.ScmRun (timeseries: 3, timepoints: 2)> Time: Start: 2005-01-01T00:00:00 End: 2010-01-01T00:00:00 Meta: model scenario region variable unit climate_model 0 a_iam a_scenario World Primary Energy EJ/yr a_model 1 a_iam a_scenario World Primary Energy|Coal EJ/yr a_model 2 a_iam a_scenario2 World Primary Energy EJ/yr a_model [2 rows x 7 columns]
- Parameters
keep (bool) – If True, keep all timeseries satisfying the filters, otherwise drop all the timeseries satisfying the filters
inplace (bool) – If True, do operation inplace and return None
log_if_empty (bool) – If True, log a warning level message if the result is empty.
**kwargs – Argument names are keys with which to filter, values are used to do the filtering. Filtering can be done on:
all metadata columns with strings, "*" can be used as a wildcard in search strings
'level': the maximum "depth" of IAM variables (number of hierarchy levels, excluding the strings given in the 'variable' argument)
'time': takes a datetime.datetime or list of datetime.datetime's (TODO: default to np.datetime64)
'year', 'month', 'day', 'hour': takes an int or list of int's ('month' and 'day' also accept str or list of str)
If regexp=True is included in kwargs then the pseudo-regexp syntax in pattern_match() is disabled.
- Returns
If not inplace, return a new instance with the filtered data.
- Return type
ScmRun
- classmethod from_nc(fname)¶
Read a netCDF4 file from disk
- Parameters
fname (str) – Filename to read
See also
scmdata.run.ScmRun.from_nc()
- get_unique_meta(meta, no_duplicates=False)¶
Get unique values in a metadata column.
- Parameters
meta (str) – Column to retrieve metadata for
no_duplicates (Optional[bool]) – Should an error be raised if there is more than one unique value in the metadata column?
- Raises
ValueError – There is more than one unique value in the metadata column and no_duplicates is True.
KeyError – If a meta column does not exist in the run's metadata
- Returns
List of unique metadata values. If no_duplicates is True the metadata value will be returned (rather than a list).
- Return type
[List[Any], Any]
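A short sketch of both modes (run is an assumed existing instance):
>>> run.get_unique_meta("scenario")                  # list of unique scenario names
>>> run.get_unique_meta("unit", no_duplicates=True)  # single value, or ValueError if several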
- groupby(*group)¶
Group the object by unique metadata
Enables iteration over groups of data. For example, to iterate over each scenario in the object
>>> for group in df.groupby("scenario"): >>> print(group) <scmdata.ScmRun (timeseries: 2, timepoints: 3)> Time: Start: 2005-01-01T00:00:00 End: 2015-01-01T00:00:00 Meta: model scenario region variable unit climate_model 0 a_iam a_scenario World Primary Energy EJ/yr a_model 1 a_iam a_scenario World Primary Energy|Coal EJ/yr a_model <scmdata.ScmRun (timeseries: 1, timepoints: 3)> Time: Start: 2005-01-01T00:00:00 End: 2015-01-01T00:00:00 Meta: model scenario region variable unit climate_model 2 a_iam a_scenario2 World Primary Energy EJ/yr a_model
- Parameters
group (str or list of str) – Columns to group by
- Returns
See the documentation for RunGroupBy for more information
- Return type
RunGroupBy
- head(*args, **kwargs)¶
Return the head of self.timeseries().
- Parameters
*args – Passed to self.timeseries().head()
**kwargs – Passed to self.timeseries().head()
- Returns
Head of self.timeseries()
- Return type
pandas.DataFrame
- integrate(out_var=None)¶
Integrate with respect to time
- Parameters
out_var (str) – If provided, the variable column of the output is set equal to out_var. Otherwise, the output variables are equal to the input variables, prefixed with "Cumulative ".
- Returns
scmdata.ScmRun containing the integral of self with respect to time
- Return type
scmdata.ScmRun
- Warns
UserWarning – The data being integrated contains nans. If this happens, the output data will also contain nans.
- interpolate(target_times, interpolation_type='linear', extrapolation_type='linear')¶
Interpolate the dataframe onto a new time frame.
- Parameters
target_times (Union[ndarray, List[Union[datetime, int]]]) – Time grid onto which to interpolate
interpolation_type (str) – Interpolation type. Options are 'linear'
extrapolation_type (str or None) – Extrapolation type. Options are None, 'linear' or 'constant'
- Returns
A new ScmRun containing the data interpolated onto the target_times grid
- Return type
ScmRun
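A sketch of interpolating onto an annual grid (run is an assumed existing instance):
>>> import datetime as dt
>>> annual = run.interpolate(
...     [dt.datetime(year, 1, 1) for year in range(2010, 2031)],
...     interpolation_type="linear",
...     extrapolation_type="constant",
... )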
- line_plot(**kwargs)¶
Make a line plot via seaborn’s lineplot
Deprecated: use lineplot() instead
- Parameters
**kwargs – Keyword arguments to be passed to seaborn.lineplot. If none are passed, sensible defaults will be used.
- Returns
Output of call to seaborn.lineplot
- Return type
matplotlib.axes._subplots.AxesSubplot
- linear_regression()¶
Calculate linear regression of each timeseries
Note
Times in seconds since 1970-01-01 are used as the x-axis for the regressions. Such values can be accessed with self.time_points.values.astype("datetime64[s]").astype("int"). This decision does not matter for the gradients, but is important for the intercept values.
- Returns
List of dictionaries. Each dictionary contains the metadata for the timeseries plus the gradient (with key "gradient") and intercept (with key "intercept"). The gradient and intercept are stored as pint.Quantity.
- Return type
list of dict[str, Any]
- linear_regression_gradient(unit=None)¶
Calculate gradients of a linear regression of each timeseries
- Parameters
unit (str) – Output unit for gradients. If not supplied, the gradients’ units will not be converted to a common unit.
- Returns
self.meta plus a column with the value of the gradient for each timeseries. The "unit" column is updated to show the unit of the gradient.
- Return type
pandas.DataFrame
- linear_regression_intercept(unit=None)¶
Calculate intercepts of a linear regression of each timeseries
Note
Times in seconds since 1970-01-01 are used as the x-axis for the regressions. Such values can be accessed with self.time_points.values.astype("datetime64[s]").astype("int"). This decision does not matter for the gradients, but is important for the intercept values.
- Parameters
unit (str) – Output unit for intercepts. If not supplied, the intercepts' units will not be converted to a common unit.
- Returns
self.meta plus a column with the value of the intercept for each timeseries. The "unit" column is updated to show the unit of the intercept.
- Return type
pandas.DataFrame
- linear_regression_scmrun()¶
Re-calculate the timeseries based on a linear regression
- Returns
The timeseries, re-calculated based on a linear regression
- Return type
scmdata.ScmRun
- lineplot(time_axis=None, **kwargs)¶
Make a line plot via seaborn’s lineplot
If only a single unit is present, it will be used as the y-axis label. The axis object is returned so this can be changed by the user if desired.
- Parameters
time_axis ({None, "year", "year-month", "days since 1970-01-01", "seconds since 1970-01-01"}) – Time axis to use for the plot.
If None, datetime.datetime objects will be used.
If "year", the year of each time point will be used.
If "year-month", the year plus (month - 0.5) / 12 will be used.
If "days since 1970-01-01", the number of days since 1st Jan 1970 will be used (calculated using the datetime module).
If "seconds since 1970-01-01", the number of seconds since 1st Jan 1970 will be used (calculated using the datetime module).
**kwargs – Keyword arguments to be passed to seaborn.lineplot. If none are passed, sensible defaults will be used.
- Returns
Output of call to seaborn.lineplot
- Return type
matplotlib.axes._subplots.AxesSubplot
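A sketch of typical usage (run and the variable name are assumptions; hue is simply forwarded to seaborn.lineplot):
>>> ax = run.filter(variable="Surface Air Temperature Change").lineplot(
...     time_axis="year", hue="scenario"
... )
>>> ax.set_title("Warming by scenario")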
- long_data(time_axis=None)¶
Return data in long form, particularly useful for plotting with seaborn
- Parameters
time_axis ({None, "year", "year-month", "days since 1970-01-01", "seconds since 1970-01-01"}) – Time axis to use for the output's columns.
If None, datetime.datetime objects will be used.
If "year", the year of each time point will be used.
If "year-month", the year plus (month - 0.5) / 12 will be used.
If "days since 1970-01-01", the number of days since 1st Jan 1970 will be used (calculated using the datetime module).
If "seconds since 1970-01-01", the number of seconds since 1st Jan 1970 will be used (calculated using the datetime module).
- Returns
pandas.DataFrame containing the data in 'long form' (i.e. one observation per row).
- Return type
pandas.DataFrame
- property meta: pandas.core.frame.DataFrame¶
Metadata
- Return type
DataFrame
- property meta_attributes¶
Get a list of all meta keys
- Returns
Sorted list of meta keys
- Return type
list
- multiply(other, op_cols, **kwargs)¶
Multiply values
- Parameters
other (ScmRun) – ScmRun containing data to multiply
op_cols (dict of str: str) – Dictionary whose keys are the columns to drop before multiplying and whose values are the values those columns should hold in the output. For example, if we have op_cols={"variable": "Emissions|CO2 - Emissions|CO2|Fossil"} then the multiplication will be performed with an index that uses all columns except the "variable" column and the output will have a "variable" column with the value "Emissions|CO2 - Emissions|CO2|Fossil".
**kwargs (any) – Passed to prep_for_op()
- Returns
Product of self and other, using op_cols to define the columns which should be dropped before the data is aligned and to define the value of these columns in the output.
- Return type
ScmRun
Examples
>>> import numpy as np >>> from scmdata import ScmRun >>> >>> IDX = [2010, 2020, 2030] >>> >>> >>> start = ScmRun( ... data=np.arange(18).reshape(3, 6), ... index=IDX, ... columns={ ... "variable": [ ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Cumulative Emissions|CO2", ... "Surface Air Temperature Change", ... ], ... "unit": ["GtC / yr", "GtC / yr", "GtC / yr", "GtC / yr", "GtC", "K"], ... "region": ["World|NH", "World|NH", "World|SH", "World|SH", "World", "World"], ... "model": "idealised", ... "scenario": "idealised", ... }, ... ) >>> >>> start.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 Emissions|CO2|Fossil GtC / yr World|SH idealised idealised 2.0 8.0 14.0 Emissions|CO2|AFOLU GtC / yr World|SH idealised idealised 3.0 9.0 15.0 Cumulative Emissions|CO2 GtC World idealised idealised 4.0 10.0 16.0 >>> fos = start.filter(variable="*Fossil") >>> fos.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 World|SH idealised idealised 2.0 8.0 14.0 >>> >>> afolu = start.filter(variable="*AFOLU") >>> afolu.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 World|SH idealised idealised 3.0 9.0 15.0 >>> >>> fos_times_afolu = fos.multiply( ... afolu, op_cols={"variable": "Emissions|CO2|Fossil : AFOLU"} ... ) >>> fos_times_afolu.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 model scenario region variable unit idealised idealised World|NH Emissions|CO2|Fossil : AFOLU gigatC ** 2 / a ** 2 0.0 42.0 156.0 World|SH Emissions|CO2|Fossil : AFOLU gigatC ** 2 / a ** 2 6.0 72.0 210.0
- plumeplot(ax=None, quantiles_plumes=[((0.05, 0.95), 0.5), ((0.5,), 1.0)], hue_var='scenario', hue_label='Scenario', palette=None, style_var='variable', style_label='Variable', dashes=None, linewidth=2, time_axis=None, pre_calculated=False, quantile_over=('ensemble_member',))¶
Make a plume plot, showing plumes for custom quantiles
- Parameters
ax (matplotlib.axes._subplots.AxesSubplot) – Axes on which to make the plot
quantiles_plumes (list[tuple[tuple, float]]) – Configuration to use when plotting quantiles. Each element is a tuple, the first element of which is itself a tuple and the second element of which is the alpha to use for the quantile. If the first element has length two, these two elements are the quantiles to plot and a plume will be made between these two quantiles. If the first element has length one, then a line will be plotted to represent this quantile.
hue_var (str) – The column of self.meta which should be used to distinguish different hues.
hue_label (str) – Label to use in the legend for hue_var.
palette (dict) – Dictionary defining the colour to use for different values of hue_var.
style_var (str) – The column of self.meta which should be used to distinguish different styles.
style_label (str) – Label to use in the legend for style_var.
dashes (dict) – Dictionary defining the style to use for different values of style_var.
linewidth (float) – Width of lines to use (for quantiles which are not to be shown as plumes)
time_axis (str) – Time axis to use for the plot (see timeseries())
pre_calculated (bool) – Are the quantiles pre-calculated? If not, the quantiles will be calculated within this function. Pre-calculating the quantiles using ScmRun.quantiles_over() can lead to faster plotting if multiple plots are to be made with the same quantiles.
quantile_over (str, tuple[str]) – Columns of self.meta over which the quantiles should be calculated. Only used if pre_calculated is False.
- Returns
Axes on which the plot was made and the legend items we have made (in case the user wants to move the legend to a different position, for example)
- Return type
matplotlib.axes._subplots.AxesSubplot, list
Examples
>>> scmrun = ScmRun( ... data=np.random.random((10, 3)).T, ... columns={ ... "model": ["a_iam"], ... "climate_model": ["a_model"] * 5 + ["a_model_2"] * 5, ... "scenario": ["a_scenario"] * 5 + ["a_scenario_2"] * 5, ... "ensemble_member": list(range(5)) + list(range(5)), ... "region": ["World"], ... "variable": ["Surface Air Temperature Change"], ... "unit": ["K"], ... }, ... index=[2005, 2010, 2015], ... )
Plot the plumes, calculated over the different ensemble members.
>>> scmrun.plumeplot(quantile_over="ensemble_member")
Pre-calculate the quantiles, then plot
>>> summary_stats = ScmRun( ... scmrun.quantiles_over("ensemble_member", quantiles=quantiles) ... ) >>> summary_stats.plumeplot(pre_calculated=True)
Note
scmdata is not a plotting library so this function is provided as is, with little testing. In some ways, it is more intended as inspiration for other users than as a robust plotting tool.
- process_over(cols, operation, na_override=-1000000.0, **kwargs)¶
Process the data over the input columns.
- Parameters
cols (Union[str, List[str]]) – Columns to perform the operation on. The timeseries will be grouped by all other columns in meta.
operation (str or func) – The operation to perform.
If a string is provided, the equivalent pandas groupby function is used. Note that not all groupby functions are available as some do not make sense for this particular application. Additional information about the arguments for the pandas groupby functions can be found in the pandas groupby reference (https://pandas.pydata.org/pandas-docs/stable/reference/groupby.html).
If a function is provided, it will be applied to each group. The function must take a dataframe as its first argument and return a DataFrame, Series or scalar.
Note that quantile means the value of the data at a given point in the cumulative distribution of values at each point in the timeseries, for each timeseries once the groupby is applied. As a result, using q=0.5 is the same as taking the median and not the same as taking the mean/average.
na_override ([int, float]) – Convert any nan values in the timeseries meta to this value during processing. The meta values are converted back to nan before the dataframe is returned. This should not need to be changed unless the existing metadata clashes with the default na_override value.
This functionality is disabled if na_override is None, but that may produce incorrect results if the timeseries meta includes any nan values.
**kwargs – Keyword arguments to pass to the pandas operation
- Returns
The result of the operation applied to the timeseries, grouped by all columns in meta other than cols
- Return type
pandas.DataFrame
- Raises
ValueError – If the operation is not an allowed operation, or if the value of na_override clashes with any existing metadata
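Illustrative calls (run and the "ensemble_member" column are assumptions):
>>> ensemble_mean = run.process_over("ensemble_member", "mean")
>>> ensemble_p90 = run.process_over("ensemble_member", "quantile", q=0.9)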
- quantiles_over(cols, quantiles, **kwargs)¶
Calculate quantiles of the data over the input columns.
- Parameters
cols (Union[str, List[str]]) – Columns to perform the operation on. The timeseries will be grouped by all other columns in meta.
quantiles (Union[str, List[float]]) – The quantiles to calculate. This should be a list of quantiles to calculate (quantile values between 0 and 1). quantiles can also include the strings "median" or "mean" if these values are to be calculated.
**kwargs – Passed to process_over().
- Returns
The quantiles of the timeseries, grouped by all columns in meta other than cols. Each calculated quantile is given a label which is stored in the quantile column within the output index.
- Return type
pandas.DataFrame
- Raises
TypeError – operation is included in kwargs. The operation is inferred from quantiles.
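A sketch mirroring the plumeplot() example above (run and the "ensemble_member" column are assumptions):
>>> quantiles = run.quantiles_over("ensemble_member", quantiles=[0.05, 0.5, 0.95, "mean"])
>>> summary = ScmRun(quantiles)  # can then be passed to plumeplot(pre_calculated=True)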
- reduce(func, dim=None, axis=None, **kwargs)¶
Apply a function along a given axis
This is to provide the GroupBy functionality in ScmRun.groupby() and is not generally called directly.
This implementation is very bare-bones: no reduction along the time dimension is allowed and only the dim parameter is used.
- Parameters
func (function) –
dim (str) – Ignored
axis (int) – The dimension along which the function is applied. The only valid value is 0, which corresponds to the time-series dimension.
kwargs – Other parameters passed to func
- Returns
- Return type
ScmRun
- Raises
ValueError – If a dimension other than None is provided
NotImplementedError – If axis is anything other than 0
- relative_to_ref_period_mean(append_str=None, **kwargs)¶
Return the timeseries relative to a given reference period mean.
The reference period mean is subtracted from all values in the input timeseries.
- Parameters
- Returns
New object containing the timeseries, adjusted to the reference period mean. The reference period year bounds are stored in the meta columns "reference_period_start_year" and "reference_period_end_year".
- Return type
ScmRun
- Raises
NotImplementedError – append_str is not None
- required_cols = ('model', 'scenario', 'region', 'variable', 'unit')¶
Minimum metadata columns required by an ScmRun.
If an application requires a different set of required metadata, this can be specified by overriding required_cols on a custom class inheriting scmdata.run.BaseScmRun. Note that at a minimum, ("variable", "unit") columns are required.
- resample(rule='AS', **kwargs)¶
Resample the time index of the timeseries data onto a custom grid.
This helper function allows values to be easily interpolated onto annual or monthly timesteps using rule='AS' or rule='MS' respectively. Internally, the interpolate function performs the regridding.
- Parameters
rule (str) – See the pandas user guide for a list of options. Note that Business-related offsets such as "BusinessDay" are not supported.
**kwargs – Other arguments to pass through to interpolate()
- Returns
New ScmRun instance on a new time index
- Return type
ScmRun
Examples
Resample a dataframe to annual values
>>> scm_df = ScmRun( ... pd.Series([1, 2, 10], index=(2000, 2001, 2009)), ... columns={ ... "model": ["a_iam"], ... "scenario": ["a_scenario"], ... "region": ["World"], ... "variable": ["Primary Energy"], ... "unit": ["EJ/y"], ... } ... ) >>> scm_df.timeseries().T model a_iam scenario a_scenario region World variable Primary Energy unit EJ/y year 2000 1 2010 10
An annual timeseries can be the created by interpolating to the start of years using the rule ‘AS’.
>>> res = scm_df.resample('AS') >>> res.timeseries().T model a_iam scenario a_scenario region World variable Primary Energy unit EJ/y time 2000-01-01 00:00:00 1.000000 2001-01-01 00:00:00 2.001825 2002-01-01 00:00:00 3.000912 2003-01-01 00:00:00 4.000000 2004-01-01 00:00:00 4.999088 2005-01-01 00:00:00 6.000912 2006-01-01 00:00:00 7.000000 2007-01-01 00:00:00 7.999088 2008-01-01 00:00:00 8.998175 2009-01-01 00:00:00 10.00000
>>> m_df = scm_df.resample('MS') >>> m_df.timeseries().T model a_iam scenario a_scenario region World variable Primary Energy unit EJ/y time 2000-01-01 00:00:00 1.000000 2000-02-01 00:00:00 1.084854 2000-03-01 00:00:00 1.164234 2000-04-01 00:00:00 1.249088 2000-05-01 00:00:00 1.331204 2000-06-01 00:00:00 1.416058 2000-07-01 00:00:00 1.498175 2000-08-01 00:00:00 1.583029 2000-09-01 00:00:00 1.667883 ... 2008-05-01 00:00:00 9.329380 2008-06-01 00:00:00 9.414234 2008-07-01 00:00:00 9.496350 2008-08-01 00:00:00 9.581204 2008-09-01 00:00:00 9.666058 2008-10-01 00:00:00 9.748175 2008-11-01 00:00:00 9.833029 2008-12-01 00:00:00 9.915146 2009-01-01 00:00:00 10.000000 [109 rows x 1 columns]
Note that the values do not fall exactly on integer values as not all years are exactly the same length.
References
See the pandas documentation for resample (http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.resample.html) for more information about possible arguments.
- property shape: tuple¶
Get the shape of the underlying data as
(num_timeseries, num_timesteps)
- Returns
- Return type
tuple of int
- subtract(other, op_cols, **kwargs)¶
Subtract values
- Parameters
other (ScmRun) – ScmRun containing data to subtract
op_cols (dict of str: str) – Dictionary whose keys are the columns to drop before subtracting and whose values are the values those columns should hold in the output. For example, if we have op_cols={"variable": "Emissions|CO2 - Emissions|CO2|Fossil"} then the subtraction will be performed with an index that uses all columns except the "variable" column and the output will have a "variable" column with the value "Emissions|CO2 - Emissions|CO2|Fossil".
**kwargs (any) – Passed to prep_for_op()
- Returns
Difference between self and other, using op_cols to define the columns which should be dropped before the data is aligned and to define the value of these columns in the output.
- Return type
ScmRun
Examples
>>> import numpy as np >>> from scmdata import ScmRun >>> >>> IDX = [2010, 2020, 2030] >>> >>> >>> start = ScmRun( ... data=np.arange(18).reshape(3, 6), ... index=IDX, ... columns={ ... "variable": [ ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Emissions|CO2|Fossil", ... "Emissions|CO2|AFOLU", ... "Cumulative Emissions|CO2", ... "Surface Air Temperature Change", ... ], ... "unit": ["GtC / yr", "GtC / yr", "GtC / yr", "GtC / yr", "GtC", "K"], ... "region": ["World|NH", "World|NH", "World|SH", "World|SH", "World", "World"], ... "model": "idealised", ... "scenario": "idealised", ... }, ... ) >>> >>> start.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 Emissions|CO2|Fossil GtC / yr World|SH idealised idealised 2.0 8.0 14.0 Emissions|CO2|AFOLU GtC / yr World|SH idealised idealised 3.0 9.0 15.0 Cumulative Emissions|CO2 GtC World idealised idealised 4.0 10.0 16.0 >>> fos = start.filter(variable="*Fossil") >>> fos.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|Fossil GtC / yr World|NH idealised idealised 0.0 6.0 12.0 World|SH idealised idealised 2.0 8.0 14.0 >>> >>> afolu = start.filter(variable="*AFOLU") >>> afolu.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 variable unit region model scenario Emissions|CO2|AFOLU GtC / yr World|NH idealised idealised 1.0 7.0 13.0 World|SH idealised idealised 3.0 9.0 15.0 >>> >>> fos_minus_afolu = fos.subtract( ... afolu, op_cols={"variable": "Emissions|CO2|Fossil - AFOLU"} ... ) >>> fos_minus_afolu.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 model scenario region variable unit idealised idealised World|NH Emissions|CO2|Fossil - AFOLU gigatC / a -1.0 -1.0 -1.0 World|SH Emissions|CO2|Fossil - AFOLU gigatC / a -1.0 -1.0 -1.0 >>> >>> nh_minus_sh = nh.subtract(sh, op_cols={"region": "World|NH - SH"}) >>> nh_minus_sh.head() time 2010-01-01 00:00:00 2020-01-01 00:00:00 2030-01-01 00:00:00 model scenario region variable unit idealised idealised World|NH - SH Emissions|CO2|Fossil gigatC / a -2.0 -2.0 -2.0 Emissions|CO2|AFOLU gigatC / a -2.0 -2.0 -2.0
- tail(*args, **kwargs)¶
Return the tail of self.timeseries().
- Parameters
*args – Passed to self.timeseries().tail()
**kwargs – Passed to self.timeseries().tail()
- Returns
Tail of self.timeseries()
- Return type
pandas.DataFrame
- time_mean(rule)¶
Take time mean of self
Note that this method will not copy the metadata attribute to the returned value.
- Parameters
rule (["AC", "AS", "A"]) – How to take the time mean. The names reflect the pandas user guide where they can, but only the options given above are supported. For clarity, if rule is 'AC', then the mean is an annual mean, i.e. each time point in the result is the mean of all values for that particular year. If rule is 'AS', then the mean is an annual mean centred on the beginning of the year, i.e. each time point in the result is the mean of all values from July 1st in the previous year to June 30 in the given year. If rule is 'A', then the mean is an annual mean centred on the end of the year, i.e. each time point in the result is the mean of all values from July 1st of the given year to June 30 in the next year.
- Returns
The time mean of self.
- Return type
ScmRun
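A short sketch (run is an assumed existing instance):
>>> calendar_year_mean = run.time_mean("AC")   # mean over each calendar year
>>> start_of_year_mean = run.time_mean("AS")   # mean centred on 1 January of each year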
- property time_points¶
Time points of the data
- Returns
- Return type
scmdata.time.TimePoints
- timeseries(meta=None, check_duplicated=True, time_axis=None, drop_all_nan_times=False)¶
Return the data with metadata as a pandas.DataFrame.
- Parameters
meta (list[str]) – The list of meta columns that will be included in the output's MultiIndex. If None (default), then all metadata will be used.
check_duplicated (bool) – If True, an exception is raised if any of the timeseries have duplicated metadata
time_axis ({None, "year", "year-month", "days since 1970-01-01", "seconds since 1970-01-01"}) – See long_data() for a description of the options.
drop_all_nan_times (bool) – Should time points which contain only nan values be dropped? This operation is applied after any transforms introduced by the value of time_axis.
- Returns
DataFrame with datetimes as columns and timeseries as rows. Metadata is in the index.
- Return type
pandas.DataFrame
- Raises
NonUniqueMetadataError – If the metadata are not unique between timeseries and check_duplicated is True
NotImplementedError – The value of time_axis is not recognised
ValueError – The value of time_axis would result in columns which aren't unique
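A sketch showing a reduced MultiIndex and integer-year columns (run is an assumed existing instance):
>>> df = run.timeseries(
...     meta=["model", "scenario", "region", "variable", "unit"],
...     time_axis="year",
... )
>>> df.columns  # integer years rather than datetimes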
- to_csv(fname, **kwargs)¶
Write timeseries data to a csv file
- Parameters
fname (str) – Path to write the file into
- Return type
None
- to_iamdataframe()¶
Convert to a LongDatetimeIamDataFrame instance.
LongDatetimeIamDataFrame is a subclass of pyam.IamDataFrame. We use LongDatetimeIamDataFrame to ensure all times can be handled, see the docstring of LongDatetimeIamDataFrame for details.
- Returns
LongDatetimeIamDataFrame instance containing the same data.
- Return type
LongDatetimeIamDataFrame
- Raises
ImportError – If pyam is not installed
- to_nc(fname, dimensions=('region',), extras=(), **kwargs)¶
Write timeseries to disk as a netCDF4 file
Each unique variable will be written as a variable within the netCDF file. Choosing the dimensions and extras such that there are as few empty (or nan) values as possible will lead to the best compression on disk.
- Parameters
fname (str) – Path to write the file into
dimensions (iterable of str) – Dimensions to include in the netCDF file. The time dimension is always included (if not provided it will be the last dimension). An additional dimension (specifically a co-ordinate in xarray terms), "_id", will be included if extras is provided and any of the metadata in extras is not uniquely defined by dimensions. "_id" maps the timeseries in each variable to their relevant metadata.
extras (iterable of str) – Metadata columns to write as variables in the netCDF file (specifically as "non-dimension co-ordinates" in xarray terms, see xarray terminology for more details). Where possible, these non-dimension co-ordinates will use dimension co-ordinates as their own co-ordinates. However, if the metadata in extras is not defined by a single dimension in dimensions, then the extras co-ordinates will have dimensions of "_id". This "_id" co-ordinate maps the values in the extras co-ordinates to each timeseries in the serialised dataset. Where "_id" is required, an extra "_id" dimension will also be added to dimensions.
kwargs – Passed through to xarray.Dataset.to_netcdf()
See also
scmdata.run.ScmRun.to_nc()
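A sketch of writing and re-reading a file (the filename and metadata columns are illustrative):
>>> run.to_nc("output.nc", dimensions=("region", "scenario"), extras=("climate_model",))
>>> round_trip = MAGICCData.from_nc("output.nc")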
- to_xarray(dimensions=('region',), extras=(), unify_units=True)¶
Convert to a
xarray.Dataset
- Parameters
dimensions (iterable of str) – Dimensions for each variable in the returned dataset. If an "_id" co-ordinate is required (see the extras documentation for when "_id" is required) and is not included in dimensions then it will be the last dimension (or second last dimension if "time" is also not included in dimensions). If "time" is not included in dimensions it will be the last dimension.
extras (iterable of str) – Columns in self.meta from which to create "non-dimension co-ordinates" (see xarray terminology for more details). These non-dimension co-ordinates store extra information and can be mapped to each timeseries found in the data variables of the output xarray.Dataset. Where possible, these non-dimension co-ordinates will use dimension co-ordinates as their own co-ordinates. However, if the metadata in extras is not defined by a single dimension in dimensions, then the extras co-ordinates will have dimensions of "_id". This "_id" co-ordinate maps the values in the extras co-ordinates to each timeseries in the serialised dataset. Where "_id" is required, an extra "_id" dimension will also be added to dimensions.
unify_units (bool) – If a given variable has multiple units, should we attempt to unify them?
- Returns
Data in self, re-formatted as an xarray.Dataset
- Return type
xarray.Dataset
- Raises
ValueError – If a variable has multiple units and unify_units is False.
ValueError – If a variable has multiple units which are not able to be converted to a common unit because they have different base units.
- property values: numpy.ndarray¶
Timeseries values without metadata
The values are returned such that each row is a different timeseries and each column is a different time (although no time information is included as a plain numpy.ndarray is returned).
- Returns
The array in the same shape as ScmRun.shape(), that is (num_timeseries, num_timesteps).
- Return type
np.ndarray
- write(filepath, magicc_version)[source]¶
Write an input file to disk.
For more information on file conventions, see MAGICC file conventions.
- Parameters
filepath (str) – Filepath of the file to write.
magicc_version (int) – The MAGICC version for which we want to write files. MAGICC7 and MAGICC6 namelists are incompatible hence we need to know which one we’re writing for.
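A sketch of a read-modify-write round trip (the paths are placeholders and it assumes, as above, that MAGICCData can be constructed from a file path):
>>> import os
>>> mdata = MAGICCData(os.path.join("path", "to", "HISTRCP_CO2I_EMIS.IN"))  # placeholder path
>>> mdata.write(os.path.join("path", "to", "output", "HISTRCP_CO2I_EMIS.IN"), magicc_version=7)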
- pymagicc.io.UNSUPPORTED_OUT_FILES = ['CARBONCYCLE.*OUT', 'PF\\_.*OUT', 'DATBASKET_.*', '.*INVERSE\\_.*EMIS.*OUT', '.*INVERSEEMIS\\.BINOUT', 'PRECIPINPUT.*OUT', 'TEMP_OCEANLAYERS.*\\.BINOUT', 'TIMESERIESMIX.*OUT', 'SUMMARY_INDICATORS.OUT']¶
List of regular expressions which define output files we cannot read.
These files are nasty to read and not that useful hence are unsupported. The solution for these files is to fix the output format rather than hacking the readers. Obviously that doesn’t help for the released MAGICC6 binary but there is nothing we can do there. For MAGICC7, we should have a much nicer set.
Some more details about why these files are not supported:
CARBONCYCLE.OUT has no units and we don't want to hardcode them
Sub annual binary files (including volcanic RF) are asking for trouble
Permafrost output files don’t make any sense right now
Output baskets have inconsistent variable names from other outputs
Inverse emissions files (except INVERSEEMIS.OUT) have no units and we don’t want to hardcode them
We have no idea what the precipitation input is
Temp ocean layers is hard to predict because it has many layers
Time series mix output files don’t have units or regions
Summary indicator files are a brand new format for little gain
- Type
list
- pymagicc.io.determine_tool(filepath, tool_to_get)[source]¶
Determine the tool to use for reading/writing.
The function uses an internally defined set of mappings between filepaths, regular expressions and readers/writers to work out which tool to use for a given task, given the filepath.
It is intended for internal use only, but is public because of its importance to the input/output of pymagicc.
If it fails, it will give clear error messages about why and what the available regular expressions are.
>>> mdata = MAGICCData()
>>> mdata.read(MAGICC7_DIR, "HISTRCP_CO2I_EMIS.txt")
ValueError: Couldn't find appropriate writer for HISTRCP_CO2I_EMIS.txt. The file must be one of the following types and the filepath must match its corresponding regular expression:
SCEN: ^.*\.SCEN$
SCEN7: ^.*\.SCEN7$
prn: ^.*\.prn$
- Parameters
filepath (str) – Name of the file to read/write, including extension
tool_to_get (str) – The tool to get, valid options are "reader", "writer". Invalid values will throw a NoReaderWriterError.
- pymagicc.io.get_generic_rcp_name(inname)[source]¶
Convert an RCP name into the generic Pymagicc RCP name
The conversion is case insensitive.
- Parameters
inname (str) – The name for which to get the generic Pymagicc RCP name
- Returns
The generic Pymagicc RCP name
- Return type
str
Examples
>>> get_generic_rcp_name("RCP3PD") "rcp26"
- pymagicc.io.pull_cfg_from_parameters_out(parameters_out, namelist_to_read='nml_allcfgs')[source]¶
Pull out a single config set from a parameters_out namelist.
This function returns a single config set with the config that needs to be passed to MAGICC in order to do the same run as is represented by the values in parameters_out.
- Parameters
parameters_out (dict, f90nml.Namelist) – The parameters to dump
namelist_to_read (str) – The namelist to read from the file.
- Returns
An f90nml object with the cleaned, read out config.
- Return type
f90nml.Namelist
Examples
>>> cfg = pull_cfg_from_parameters_out(magicc.metadata["parameters"]) >>> cfg.write("/somewhere/else/ANOTHERNAME.cfg")
- pymagicc.io.pull_cfg_from_parameters_out_file(parameters_out_file, namelist_to_read='nml_allcfgs')[source]¶
Pull out a single config set from a MAGICC PARAMETERS.OUT file.
This function reads in the PARAMETERS.OUT file and returns a single config set with the config that needs to be passed to MAGICC in order to do the same run as is represented by the values in PARAMETERS.OUT.
- Parameters
parameters_out_file (str) – The PARAMETERS.OUT file to read
namelist_to_read (str) – The namelist to read from the file.
- Returns
An f90nml object with the cleaned, read out config.
- Return type
f90nml.Namelist
Examples
>>> cfg = pull_cfg_from_parameters_out_file("PARAMETERS.OUT") >>> cfg.write("/somewhere/else/ANOTHERNAME.cfg")
- pymagicc.io.read_cfg_file(filepath)[source]¶
Read a MAGICC .CFG file, or any other Fortran namelist
- Parameters
filepath (str) – Full path (path and name) to the file to read
- Returns
An f90nml Namelist instance which contains the namelists in the file. A Namelist can be accessed just like a dictionary.
- Return type
f90nml.Namelist
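A sketch (the filename and parameter name are illustrative; any Fortran namelist file works):
>>> cfg = read_cfg_file("MAGCFG_USER.CFG")  # placeholder filename
>>> cfg["nml_allcfgs"]["core_climatesensitivity"]  # access like a nested dictionary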
- pymagicc.io.read_mag_file_metadata(filepath)[source]¶
Read only the metadata in a .MAG file
This provides a way to access a .MAG file's metadata without reading the entire datablock, significantly reducing read time.
- Parameters
filepath (str) – Full path (path and name) to the file to read
- Returns
Metadata read from the file
- Return type
dict
- Raises
ValueError – The file is not a .MAG file
- pymagicc.io.to_int(x)[source]¶
Convert inputs to int and check conversion is sensible
- Parameters
x (np.array) – Values to convert
- Returns
Input, converted to int
- Return type
np.array of int
- Raises
ValueError – If the int representation of any of the values is not equal to its original representation (where equality is checked using the != operator).
TypeError – x is not a np.ndarray
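A short sketch of the intended use and failure mode:
>>> import numpy as np
>>> years = to_int(np.array([2000.0, 2005.0, 2010.0]))  # fine, values are whole numbers
>>> to_int(np.array([2000.5]))  # raises ValueError: 2000.5 has no exact int representation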