File I/O (spike_sort.io)¶
Functions for reading and writing datafiles.
Read/Write Filters (spike_sort.io.filters)¶
Filters are basic backends for read/write operations. They offer the following methods:

- read_spt() – read event times (such as spike times)
- read_sp() – read raw spike waveforms
- write_spt() – write spike times
- write_sp() – write raw spike waveforms
The read_* methods usually take one argument (datapath), but it is not required. The write_* methods take datapath and the data to be written.
If you want to read/write your custom data format, it is enough to implement a class with these methods.
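For illustration, a minimal custom filter implementing this interface could look like the sketch below. The class name and the in-memory storage are invented for this example; a real filter would read from and write to files.

```python
# A minimal sketch of a custom filter implementing the spike_sort.io
# interface. The in-memory dict storage and the class name are
# hypothetical; a real filter would persist data to files.

class DictFilter:
    """Stores spike times and waveforms in a plain dict, keyed by datapath."""

    def __init__(self):
        self._store = {}

    def read_spt(self, datapath):
        # return event times (such as spike times)
        return self._store[datapath]

    def write_spt(self, spt_dict, datapath):
        # write spike times
        self._store[datapath] = spt_dict

    def read_sp(self, datapath):
        # return raw spike waveforms
        return self._store[datapath]

    def write_sp(self, sp_dict, datapath):
        # write raw spike waveforms
        self._store[datapath] = sp_dict
```

Any object exposing these four methods can then be passed wherever an io.filters object is expected.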
The following filters are implemented:
filters.BakerlabFilter(conf_file) | Filter for custom binary data structure. |
filters.PyTablesFilter(fname[, mode]) | Read/write HDF5 files. |
Export tools (spike_sort.io.export)¶
These tools take one of the io.filters as an argument and export data to a file using the write_spt or write_sp methods.
export.export_cells(io_filter, node_templ, …) | Export discriminated spike times of all cells to a file. |
Reference¶
Filters are basic backends for read/write operations. They offer the following methods:
- read_spt – read event times (such as spike times)
- write_spt – write spike times
- read_sp – read raw spike waveforms
- write_sp – write raw spike waveforms
class spike_sort.io.filters.BakerlabFilter(conf_file)¶
Filter for custom binary data structure.
The binary data consists of independent files for each contact in each electrode written as 16-bit ints.
The paths to the datafiles are defined in an .inf file that is JSON-compatible and contains at least the following attributes:

- fspike : str – path to raw recordings, relative to dirname
- cell : str – path to spike times (with a resolution of 20 µs), relative to dirname
- n_contacts : int – number of contacts per electrode
- dirname : str – path to the data
- FS : int – spike sampling frequency
Each of the paths can include any of the following Python formatting placeholders:

- {subject} – subject name
- {cell_id} – cell ID
- {ses_id} – session ID
- {el_id} – electrode ID

These will be substituted with data extracted from the datapath parameter of the reading and writing methods.
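As a sketch of how such a configuration fits together, here is a hypothetical .inf file and the path expansion it produces with Python's standard str.format (all file names and values below are invented for illustration):

```python
import json

# Contents of a hypothetical .inf configuration file; every value here
# is invented for illustration and is not from a real dataset.
inf_text = """
{
    "dirname": "/data/recordings",
    "fspike": "{subject}/s{ses_id}/el{el_id}.sp",
    "cell": "{subject}/s{ses_id}/el{el_id}/cell{cell_id}.spt",
    "n_contacts": 4,
    "FS": 25000
}
"""

conf = json.loads(inf_text)

# Placeholders are filled with values taken from the datapath argument
# of the reading/writing methods, e.g. subject "Joe", session 1,
# electrode 2, cell 1:
path = conf["cell"].format(subject="Joe", ses_id=1, el_id=2, cell_id=1)
print(path)  # Joe/s1/el2/cell1.spt
```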
Parameters:
conf_file : str
    path to the configuration file
Methods

close() |
read_sp(dataset[, memmap]) | Reads raw spike waveforms from a file in Bakerlab format. |
read_spt(dataset) | Returns spike times in milliseconds. |
write_sp(sp_dict, dataset) | Writes raw spike waveforms to a file in Bakerlab format. |
write_spt(spt_dict, dataset[, overwrite]) | Writes spike times to a binary file. |
read_sp(dataset, memmap=None)¶
Reads raw spike waveforms from a file in Bakerlab format.

Parameters:
dataset : str
    dataset path (in the format /{subject}/session{ses_id}/el{el_id})
memmap : {'numpy', 'tables', None}, optional
    use memory-mapped arrays to save memory (defaults to no memory mapping)
read_spt(dataset)¶
Returns spike times in milliseconds.

Parameters:
dataset : str
    dataset path (in the format /{subject}/session{ses_id}/el{el_id}/cell{cell_id})
write_sp(sp_dict, dataset)¶
Writes raw spike waveforms to a file in Bakerlab format.

Parameters:
sp_dict : dict
    spike waveform dict
dataset : str
    dataset path
class spike_sort.io.filters.PyTablesFilter(fname, mode='a')¶
Read/write HDF5 files.
HDF5 is a hierarchical data format – data is organised in a tree. The standard layout is:

/{SubjectName}/
/{SubjectName}/{SessionName}/{ElectrodeID}/
/{SubjectName}/{SessionName}/{ElectrodeID}/stim : stimulus times
/{SubjectName}/{SessionName}/{ElectrodeID}/raw : spike waveforms
/{SubjectName}/{SessionName}/{ElectrodeID}/{CellID} : spike waveforms
/{SubjectName}/{SessionName}/{ElectrodeID}/{CellID}/spt : spike times

where curly brackets {} denote a group. This layout may be adjusted by changing the paths.
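A dataset path following this layout can be split into its components with a few lines of standard Python; the regular expression and field names below are illustrative, not part of the spike_sort API:

```python
import re

# Sketch: parse a dataset path of the standard HDF5 layout into its
# components. The pattern and the field names are assumptions made for
# this example.
PATH_RE = re.compile(
    r"^/(?P<subject>[^/]+)/(?P<session>[^/]+)/(?P<electrode>[^/]+)"
    r"(?:/(?P<cell>[^/]+))?$"
)

def parse_dataset(path):
    m = PATH_RE.match(path)
    if m is None:
        raise ValueError("path does not follow the standard layout: %s" % path)
    return m.groupdict()

parts = parse_dataset("/Joe/session1/el2/cell1")
print(parts["electrode"])  # el2
```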
Methods

close() |
close_all() |
read_sp(dataset) | Read continuous waveforms (EEG, LFP, spike waveforms). |
read_spt(dataset) | Read event times (such as spike or stimulus times). |
write_sp(sp_dict, dataset[, overwrite]) | Write signal. |
write_spt(spt_dict, dataset[, overwrite]) | Write spike times. |
read_sp(dataset)¶
Read continuous waveforms (EEG, LFP, spike waveforms).

Parameters:
dataset : str
    path pointing to the cell node
read_spt(dataset)¶
Read event times (such as spike or stimulus times).

Parameters:
dataset : str
    path pointing to the cell node
write_sp(sp_dict, dataset, overwrite=False)¶
Write signal.
write_spt(spt_dict, dataset, overwrite=False)¶
Write spike times.
spike_sort.io.export.export_cells(io_filter, node_templ, spike_times, overwrite=False)¶
Export discriminated spike times of all cells to a file.

Parameters:
io_filter : object
    read/write filter object (see spike_sort.io.filters)
node_templ : string
    string identifying the dataset name. It will be passed to the io_filter's write_spt method. It can contain the {cell_id} placeholder, which will be substituted with the cell identifier.
spike_times : dict
    dictionary in which keys are the cell IDs and values are spike times structures
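Conceptually, the export amounts to a loop like the following sketch. The stub filter stands in for a real spike_sort.io filter, and the function body is an assumption based on the description above, not the actual implementation:

```python
# Sketch of what export_cells does conceptually: substitute each cell ID
# into the dataset template and write that cell's spike times. The stub
# filter below is hypothetical and only records what would be written.

class StubFilter:
    def __init__(self):
        self.written = {}

    def write_spt(self, spt_dict, dataset, overwrite=False):
        self.written[dataset] = spt_dict

def export_cells_sketch(io_filter, node_templ, spike_times, overwrite=False):
    for cell_id, spt in spike_times.items():
        dataset = node_templ.format(cell_id=cell_id)
        io_filter.write_spt(spt, dataset, overwrite=overwrite)

f = StubFilter()
spikes = {1: {"data": [10.0, 22.5]}, 2: {"data": [5.1]}}
export_cells_sketch(f, "/Joe/session1/el2/cell{cell_id}", spikes)
print(sorted(f.written))  # ['/Joe/session1/el2/cell1', '/Joe/session1/el2/cell2']
```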