Jacques is seeking to segment and follow cells over time.
Steps performed so far:
Jacques sent me an image acquired from the microscope, which I saved as ‘test_data/raw.tif’; he also sent me a cellpose model, which I saved to the ‘models/’ directory.
Given these as a starting point, let us try to open the image with a fiji instance and split it into a series of smaller files by timepoint.
I have been primarily using this as a way to refresh my brain on python and get current on best practices. Thus, there are decisions I made in this workbook which do not make sense in any other context. Why would anyone make the primary data structure a dictionary keyed by filename? That is dumb; I chose to do it to refresh myself on playing with dictionaries. By the same token, why would anyone make a dictionary of dataframes only to turn around and concatenate them for use in geopandas? That is crazy; I did it to get more comfortable with pandas and work out some of my R-ish muscle memory.
With that in mind, if we choose to make this a package, the first thing that will need to happen is to rework the base datastructure. I just want anyone who actually reads this code to know that yes, I am a nutter, but a nutter for a reason.
from cellpose import models, io
from cellpose.io import *
from collections import defaultdict
import geopandas
import glob
import imagej
from jpype import JArray, JInt
import matplotlib.pyplot as plt
import multiprocessing as mp
import numpy as np
import os
import pandas
from pandas import DataFrame
from pathlib import Path
import scyjava
import seaborn
import shutil
base_dir = Path('/lab/scratch/atb/imaging/mtb_2023').as_posix()
os.chdir(base_dir)
input_file = Path(f"{base_dir}/test_data/Experiment-1568.czi").as_posix()
pandas.set_option('display.max_columns', None)
verbose = True
Note that I am using fiji/imagej from within a virtual environment which I symbolically linked to the local directory ‘venv/’.
As a result, when I initialize fiji, I will call the base directory of the downloaded fiji tree within the virtual environment, which I somewhat erroneously put in bin/.
scyjava.config.add_option('-Xmx128g')
start_dir = os.getcwd()
ij = imagej.init(Path('venv/bin/Fiji.app'), mode = 'interactive')
ij.getApp().getInfo(True)
## 'ImageJ2 2.9.0/1.53t; Java 1.8.0_322 [amd64]; 117MB of 116508MB'
ij.ui().showUI()
## Something about this init() function changes the current working directory.
os.chdir(start_dir)
ij.getVersion()
## '2.9.0/1.53t'
showPolygonRoi = scyjava.jimport('ij.gui.PolygonRoi')
Overlay = scyjava.jimport('ij.gui.Overlay')
Regions = scyjava.jimport('net.imglib2.roi.Regions')
LabelRegions = scyjava.jimport('net.imglib2.roi.labeling.LabelRegions')
ZProjector = scyjava.jimport('ij.plugin.ZProjector')()
ov = Overlay()
Open the file, figure out its dimensions, and write portions of it to an output directory.
I wrote this thinking I could parallelize the output writing across 8 cpus. But I do not yet understand how python scopes variables in that context, so it did not quite work. I turned that off for the moment; the serial version finished in about a minute.
The following function may need to check whether the output directory already exists, because scijava will freak out otherwise.
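The scoping trouble above is usually that a pool worker silently depends on variables from the enclosing scope. A minimal sketch of the shape that does work, writing placeholder frames via a thread pool (threads rather than processes, since the ImageJ gateway almost certainly cannot be shared across processes; `write_frame`, the payloads, and the temp directory are all invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
import os
import tempfile

def write_frame(args):
    ## All state arrives through the argument tuple, so nothing depends on
    ## variables captured from the caller's scope (the usual pitfall when
    ## moving a loop body into a pool worker).
    out_dir, timepoint, payload = args
    out_file = os.path.join(out_dir, f"frame_{timepoint}.txt")
    with open(out_file, "w") as fh:
        fh.write(payload)
    return out_file

out_dir = tempfile.mkdtemp()
tasks = [(out_dir, t, f"placeholder pixels {t}") for t in range(8)]
with ThreadPoolExecutor(max_workers = 8) as pool:
    ## map() preserves task order, so written[i] matches tasks[i].
    written = list(pool.map(write_frame, tasks))
```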
def separate_slices(input_file, wanted_x = True, wanted_y = True,
                    wanted_z = 1, wanted_channel = 2, cpus = 8,
                    overwrite = False):
    """ Slice an image in preparation for cellpose.

    Eventually this should be smart enough to handle arbitrary
    x,y,z,channels,times as well as able to use multiple cpus for
    saving the data. In its current implementation, it only saves 1
    z, 1 channel for every frame of an image into a series of files in
    its output directory.
    """
    input_base = os.path.basename(input_file)
    input_dir = os.path.dirname(input_file)
    input_name = os.path.splitext(input_base)[0]
    output_directory = Path(f"{input_dir}/outputs/{input_name}_z{wanted_z}").as_posix()
    os.makedirs(output_directory, exist_ok = True)
    if verbose:
        print("Starting to open the input file, this takes a moment.")
    raw_dataset = ij.io().open(input_file)
    if verbose:
        print(f"Opened input file, writing images to {output_directory}")
    data_info = {}
    for element in range(len(raw_dataset.dims)):
        name = raw_dataset.dims[element]
        data_info[name] = raw_dataset.shape[element]
    if verbose:
        print(f"This dataset has dimensions: X:{data_info['X']}",
              f"Y:{data_info['Y']} Z:{data_info['Z']} Time:{data_info['Time']}")
    slices = []
    for timepoint in range(data_info['Time']):
        wanted_slice = raw_dataset[:, :, wanted_channel, wanted_z, timepoint]
        slice_data = ij.py.to_dataset(wanted_slice)
        output_filename = Path(f"{output_directory}/frame_{timepoint}.tif").as_posix()
        if (os.path.exists(output_filename)):
            if overwrite:
                print(f"Rewriting {output_filename}")
                os.remove(output_filename)
                saved = ij.io().save(slice_data, output_filename)
            else:
                if verbose:
                    print(f"Skipping {output_filename}, it already exists.")
        else:
            saved = ij.io().save(slice_data, output_filename)
            if verbose:
                print(f"Saving image {input_name}_{timepoint}.")
        slices.append(wanted_slice)
    return raw_dataset, slices, output_directory
At this point we should have a directory containing files of individual timepoints. Jacques sent me an initial implementation of the usage of cellpose to call individual cells. Let us include that now. I think the previous function should probably also return the directory of the separated input files.
## Relevant options:
## batch_size (increase for more parallelization), channels (a list of two-element
##   pairs: the first element is the channel to segment, the second an optional
##   nucleus channel; elements are color channels, 0 = grayscale, 1 = red,
##   2 = green, 3 = blue, so [[0,0],[2,3]] means segment the main cells in
##   grayscale, then a second pass with cells in green and nuclei in blue),
## channel_axis, z_axis ? invert (T/F flip pixels from b/w I assume),
## normalize (T/F percentile normalize the data), diameter, do_3d,
## anisotropy (rescaling factor for 3d segmentation), net_avg (average models),
## augment ?, tile ?, resample, interp, flow_threshold, cellprob_threshold (interesting),
## min_size (turned off with -1), stitch_threshold ?, rescale ?.
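As a concrete reading of the channels option (the values here are illustrative, following the cellpose convention of 0 = grayscale and 1/2/3 = red/green/blue; they are not from Jacques' data):

```python
## Each inner pair is [segmentation_channel, optional_nucleus_channel].
cells_only_grayscale = [[0, 0]]     ## segment whole cells on the grayscale image
green_cells_blue_nuclei = [[2, 3]]  ## cytoplasm in green, nuclei in blue
both_passes = cells_only_grayscale + green_cells_blue_nuclei
```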
def invoke_cellpose(input_directory, model_file, channels = [[0, 0]], diameter = 160,
                    threshold = 0.4, do_3D = False, batch_size = 64, verbose = True):
    """ Invoke cellpose using individual slices.

    This takes the series of slices from separate_slices() and sends
    them to cellpose with a specific model. The dictionary it returns
    is the primary datastructure for the various functions which follow.
    """
    ## Relevant options:
    ## model_type(cyto, nuclei, cyto2), net_avg(T/F if load built in networks and average them)
    model = models.CellposeModel(pretrained_model = model_file)
    files = get_image_files(input_directory, '_masks', look_one_level_down = False)
    imgs = []
    output_masks = []
    output_txts = []
    output_files = defaultdict(dict)
    existing_files = 0
    count = 0
    for one_file in files:
        print(f"Reading {one_file}")
        cp_output_directory = Path(f"{input_directory}/cellpose").as_posix()
        os.makedirs(cp_output_directory, exist_ok = True)
        f_name = os.path.basename(one_file)
        f_name = os.path.splitext(f_name)[0]
        start_mask = Path(f"{input_directory}/{f_name}_cp_masks.png").as_posix()
        output_mask = Path(f"{cp_output_directory}/{f_name}_cp_masks.png").as_posix()
        start_txt = Path(f"{input_directory}/{f_name}_cp_outlines.txt").as_posix()
        output_txt = Path(f"{cp_output_directory}/{f_name}_cp_outlines.txt").as_posix()
        print(f"Adding new txt file: {output_txt}")
        output_files[f_name]['input_file'] = one_file
        output_files[f_name]['start_mask'] = start_mask
        output_files[f_name]['output_mask'] = output_mask
        output_files[f_name]['start_txt'] = start_txt
        output_files[f_name]['output_txt'] = output_txt
        output_files[f_name]['exists'] = False
        if (os.path.exists(output_txt)):
            existing_files = existing_files + 1
            output_files[f_name]['exists'] = True
        else:
            img = imread(one_file)
            imgs.append(img)
            count = count + 1
    nimg = len(imgs)
    if verbose and nimg > 0:
        print(f"Read {nimg} images, starting cellpose.")
        masks, flows, styles = model.eval(
            imgs, diameter = diameter, channels = channels, flow_threshold = threshold,
            do_3D = do_3D, batch_size = batch_size)
        io.save_to_png(imgs, masks, flows, files)
        print("Moving cellpose outputs to the cellpose output directory.")
        output_filenames = list(output_files.keys())
        for f_name in output_filenames:
            print(f"Moving {output_files[f_name]['start_mask']} to {output_files[f_name]['output_mask']}")
            shutil.move(output_files[f_name]['start_mask'], output_files[f_name]['output_mask'])
            shutil.move(output_files[f_name]['start_txt'], output_files[f_name]['output_txt'])
    else:
        print("Returning the output files.")
    return output_files
One possible change is to perform measurements on the sum of Z-stacks instead of a single slice. Thus we would sum the cells, create the ROIs using the single slice grayscale image, then measure the set of all combined.
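The z-integration idea can be sketched with plain numpy before wiring it through ImageJ's ZProjector (the stack below is a random stand-in; only its 11 z-slices echo the real data):

```python
import numpy as np

## A stand-in stack: 11 z-slices of a tiny 4x4 image; pixel values are random.
rng = np.random.default_rng(42)
stack = rng.integers(0, 255, size = (11, 4, 4))

## 'sum' projection: integrate intensity across z, which is what
## ZProjector.run(imp, 'sum') does on the ImageJ side.
sum_projection = stack.sum(axis = 0)
## 'max' projection, another common choice when building masks.
max_projection = stack.max(axis = 0)
```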
def collapse_z(raw_dataset, cellpose_result, method = 'sum'):
    """ Stack multiple z slices for each timepoint.

    If I understand Jacques' explanation of the quantification methods
    correctly, they sometimes (often?) perform better on the
    z-integration of pixels at each timepoint. This function performs
    that and sends the stacked slices to the output directory and adds
    the filenames to the cellpose_result dictionary.
    """
    cellpose_slices = list(cellpose_result.keys())
    slice_number = 0
    collapsed_slices = []
    for slice_name in cellpose_slices:
        output_directory = os.path.dirname(cellpose_result[slice_name]['output_txt'])
        collapsed_directory = os.path.dirname(output_directory)
        collapsed_directory = f"{collapsed_directory}/collapsed"
        os.makedirs(collapsed_directory, exist_ok = True)
        output_filename = Path(f"{collapsed_directory}/frame{slice_number}.tif").as_posix()
        cellpose_result[slice_name]['collapsed_file'] = output_filename
        if (os.path.exists(output_filename)):
            if verbose:
                print(f"Skipping {output_filename}, it already exists.")
        else:
            larger_slice = raw_dataset[:, :, :, :, slice_number]
            imp = ij.py.to_imageplus(larger_slice)
            z_projector_result = ZProjector.run(imp, method)
            ## z_projector_mask = ij.IJ.run(z_projector_result, "Convert to Mask", "method=Otsu background=Light")
            z_collapsed_image = ij.py.from_java(z_projector_result)
            z_collapsed_dataset = ij.py.to_dataset(z_collapsed_image)
            saved = ij.io().save(z_collapsed_dataset, output_filename)
            if verbose:
                print(f"Saving image {output_filename}.")
        slice_number = slice_number + 1
    return cellpose_result
In Jacques' notebook, it looks like he only extracts ROIs from one of the cellpose slices. I am assuming the goal is to extend this across all images?
There is an important caveat that I missed: imagej ships with a python2-based scripting language, and some of his code appears to come from it. As a result I should look carefully before reusing it, and pay close attention to the examples provided here for the most appropriate ways of interacting with the ROI manager etc.:
https://github.com/imagej/pyimagej/blob/main/doc/examples/blob_detection_interactive.py
## The following is from a mix of a couple of implementations I found:
## https://pyimagej.readthedocs.io/en/latest/Classic-Segmentation.html
## an alternative method may be taken from:
## https://pyimagej.readthedocs.io/en/latest/Classic-Segmentation.html#segmentation-workflow-with-imagej2
## My goal is to pass the ROI regions to this function and create a similar df.
def slices_to_roi_measurements(cellpose_result, collapsed = False):
    """ Read the text cellpose output files, generate ROIs, and measure.

    I think there are better ways of accomplishing this task than
    using ij.IJ.run(); but this seems to work... Upon completion,
    this function should add a series of dataframes to the
    cellpose_result dictionary which comprise the various metrics from
    ImageJ's measurement function of the ROIs detected by cellpose.
    """
    output_dict = cellpose_result
    cellpose_slices = list(cellpose_result.keys())
    slice_number = 0
    for slice_name in cellpose_slices:
        output_dict[slice_name]['slice_number'] = slice_number
        input_tif = ''
        if collapsed:
            input_tif = cellpose_result[slice_name]['collapsed_file']
        else:
            input_tif = cellpose_result[slice_name]['input_file']
        slice_dataset = ij.io().open(input_tif)
        slice_data = ij.py.to_imageplus(slice_dataset)
        input_txt = cellpose_result[slice_name]['output_txt']
        input_mask = cellpose_result[slice_name]['output_mask']
        if verbose:
            print(f"Processing cellpose outline: {input_txt}")
            print(f"Measuring: {input_tif}")
        ## Convert the Dataset to an ImagePlus.
        imp = ij.py.to_imageplus(slice_data)
        rm = ij.RoiManager.getRoiManager()
        rm.runCommand("Associated", "true")
        rm.runCommand("show All with labels")
        ## The logic for this was taken from:
        ## https://stackoverflow.com/questions/73849418/is-there-any-way-to-switch-imagej-macro-code-to-python3-code
        txt_fh = open(input_txt, 'r')
        set_string = 'Set Measurements...'
        measure_string = 'area mean min centroid median skewness kurtosis integrated stack redirect=None decimal=3'
        ij.IJ.run(set_string, measure_string)
        roi_stats = defaultdict(list)
        for line in txt_fh:
            xy = line.rstrip().split(",")
            xy_coords = [int(element) for element in xy if element not in '']
            x_coords = [int(element) for element in xy[::2] if element not in '']
            y_coords = [int(element) for element in xy[1::2] if element not in '']
            xcoords_jint = JArray(JInt)(x_coords)
            ycoords_jint = JArray(JInt)(y_coords)
            polygon_roi_instance = scyjava.jimport('ij.gui.PolygonRoi')
            roi_instance = scyjava.jimport('ij.gui.Roi')
            imported_polygon = polygon_roi_instance(xcoords_jint, ycoords_jint,
                                                    len(x_coords), int(roi_instance.POLYGON))
            imp.setRoi(imported_polygon)
            rm.addRoi(imported_polygon)
            ij.IJ.run(imp, 'Measure', '')
            rm.runCommand('Update')
        slice_result = ij.ResultsTable.getResultsTable()
        slice_table = ij.convert().convert(slice_result,
                                           scyjava.jimport('org.scijava.table.Table'))
        slice_measurements = ij.py.from_java(slice_table)
        output_dict[slice_name]['measurements'] = slice_measurements
        ij.IJ.run('Clear Results')
        txt_fh.close()
        imp.setOverlay(ov)
        imp.getProcessor().resetMinAndMax()
        slice_number = slice_number + 1
    return output_dict
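The _cp_outlines.txt files consumed above hold one polygon per line, flattened as x1,y1,x2,y2,…; the even/odd slicing can be sanity-checked in isolation on an invented line:

```python
line = "10,20,30,40,50,60"  ## an invented cellpose _cp_outlines.txt line
xy = line.rstrip().split(",")
## Even positions are x, odd positions are y; the membership test drops
## any empty strings left behind by trailing commas.
x_coords = [int(element) for element in xy[::2] if element not in '']
y_coords = [int(element) for element in xy[1::2] if element not in '']
```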
## Pull out of slices_to_roi_measurements() the parts which actually define
## a cellpose-detected cell with respect to time.
## This should be used in cooperation with functions which apply the
## resulting polygons to other images (to create the ROIs) along with
## functions that perform the final measurements. If properly
## implemented, this should allow one to mix and match the invocations
## to use them on any set of the raw or sliced data.
## def extract_cellpose(cellpose_result):
slices_to_roi_measurements() returns a dictionary keyed by the filename of each raw tif file. Each element of that dictionary is in turn a dictionary containing some information about the files along with a df of the measurements provided by imagej.
My little geopandas function assumes a single long df with columns that say which timepoint each row came from. So let's make a quick function to provide that here. OTOH it may be wiser/better to change slices_to_roi_measurements() so that it returns that format of df; but since I am using this as a learning experience to get more comfortable with python data structures, I will not do it that way.
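For comparison, the idiomatic pandas route from a dictionary of per-frame dataframes to one long dataframe is a single concat; the toy measurements below are invented:

```python
import pandas

frames = {
    "frame_0": pandas.DataFrame({"Area": [10.0, 12.0]}),
    "frame_1": pandas.DataFrame({"Area": [11.0]}),
}
## concat on a dict keys the outer index level by filename; reset_index()
## then turns that level into a regular 'Frame' column.
long_df = pandas.concat(frames, names = ["Frame", "row"]).reset_index()
```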
def convert_slices_to_pandas(slices):
    """ Dump the cellpose_result slice data to a single df.

    There is no good reason for me to store the data as a series of
    dataframes within a dictionary except I want to get more
    comfortable with python datastructures. Thus, this function
    should be extraneous, but serves as a way to go from my hash to a
    single df.
    """
    concatenated = pandas.DataFrame()
    slice_keys = list(slices.keys())
    slice_counter = 0
    for k in slice_keys:
        slice_counter = slice_counter + 1
        current_slice = slices[k]
        if verbose:
            print(f"The slice is {k}")
        slice_number = current_slice['slice_number']
        slice_data = current_slice['measurements']
        slice_data['Frame'] = slice_number
        if (slice_counter == 1):
            concatenated = slice_data
        else:
            concatenated = pandas.concat([concatenated, slice_data])
    ## This is a little silly, but I couldn't remember that the index attribute
    ## is the numeric rowname for a moment.
    ## The reset_index() does what it says on the tin, and changes the 1:19, 1:20, etc.
    ## of each individual time Frame to a single range of 1:2000.
    concatenated.index = concatenated.reset_index().index
    return concatenated
def nearest_cells_over_time(df, max_dist = 10.0, max_prop = 0.7,
                            x_column = 'X', y_column = 'Y', verbose = True):
    """ Trace cells over time.

    If I understand Jacques' goals correctly, the tracing of cells
    over time should be a reasonably tractable problem for the various
    geo-statistics tools to handle; their whole purpose is to
    calculate n-dimensional distances. So, let us pass my df to one
    of them and see what happens!

    Upon completion, we should get a dictionary of lists where each
    primary key is the top-level cell ID. Each internal list is the
    set of IDs from the geopandas dataframe, which contains all of the
    measurements. Thus, we can easily extract the data for individual
    cells and play with it.
    """
    gdf = geopandas.GeoDataFrame(
        df,
        geometry = geopandas.points_from_xy(df[x_column], df[y_column]))

    final_time = gdf.Frame.max()
    pairwise_distances = []
    for start_time in range(1, final_time):
        i = start_time
        j = i + 1
        ti_idx = gdf.Frame == i
        tj_idx = gdf.Frame == j
        if verbose:
            print(f"Getting distances of dfs {i} and {j}.")
        ti = gdf[ti_idx]
        tj = gdf[tj_idx]
        ti_rows = ti.shape[0]
        tj_rows = tj.shape[0]
        titj = geopandas.sjoin_nearest(ti, tj, distance_col = "pairwise_dist")
        pairwise_distances.append(titj)

    id_counter = 0
    ## Cell IDs pointing to a list of cells
    traced = {}
    ## Endpoints pointing to the cell IDs
    ends = {}
    for i in range(0, final_time - 1):
        query = pairwise_distances[i]
        passed_idx = query.pairwise_dist <= max_dist
        failed_idx = query.pairwise_dist > max_dist
        if (failed_idx.sum() > 0):
            if verbose:
                print(f"Skipped {failed_idx.sum()} elements in segment {i}.")
            query = query[passed_idx]

        prop_change = query.Area_left / query.Area_right
        increased_idx = prop_change > 1.0
        prop_change[increased_idx] = 1.0 / prop_change[increased_idx]
        failed_idx = prop_change < max_prop
        passed_idx = prop_change >= max_prop
        if (failed_idx.sum() > 0):
            if verbose:
                skip_string = (f"Skipped {failed_idx.sum()} elements in segment {i} "
                               f"because the size changed too much.")
                print(skip_string)
            query = query[passed_idx]

        for row in query.itertuples():
            start_cell = row.Index
            end_cell = row.index_right
            if start_cell in ends.keys():
                cell_id = ends[start_cell]
                current_value = traced[cell_id]
                current_value.append(end_cell)
                traced[cell_id] = current_value
                ends[end_cell] = cell_id
            else:
                id_counter = id_counter + 1
                traced[id_counter] = [start_cell, end_cell]
                ends[end_cell] = id_counter
    return traced
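The chaining bookkeeping at the end of nearest_cells_over_time() can be exercised on its own, with hand-made (start, end) index pairs standing in for the geopandas join results:

```python
## Matches per time segment: (cell index at t, matched cell index at t+1).
segment_matches = [
    [(0, 5), (1, 6)],  ## frame 0 -> frame 1
    [(5, 9), (6, 7)],  ## frame 1 -> frame 2
]

id_counter = 0
traced = {}  ## trace id -> list of per-frame cell indices
ends = {}    ## most recently seen cell index -> trace id
for matches in segment_matches:
    for start_cell, end_cell in matches:
        if start_cell in ends:
            ## The start of this match is the end of an existing trace: extend it.
            cell_id = ends[start_cell]
            traced[cell_id].append(end_cell)
            ends[end_cell] = cell_id
        else:
            ## Otherwise begin a new trace.
            id_counter += 1
            traced[id_counter] = [start_cell, end_cell]
            ends[end_cell] = id_counter
```

Cell 0 is matched to 5 and then 5 to 9, so trace 1 becomes [0, 5, 9]; likewise 1 → 6 → 7 becomes trace 2.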
Note to self, Jacques’ new dataset uses wanted_z == 2, wanted_channel == 3.
raw_dataset, saved_slices, slice_directory = separate_slices(input_file, wanted_z = 2,
                                                             wanted_channel = 3)
## Starting to open the input file, this takes a moment.
## Opened input file, writing images to /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2
## This dataset has dimensions: X:2048 Y:2048 Z:11 Time:121
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_0.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_1.tif, it already exists.
## ...
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_101.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_102.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_103.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_104.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_105.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_106.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_107.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_108.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_109.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_110.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_111.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_112.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_113.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_114.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_115.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_116.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_117.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_118.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_119.tif, it already exists.
## Skipping /lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_120.tif, it already exists.
##
## [java.lang.Enum.toString] ZeissCZIReader initializing /lab/scratch/atb/imaging/mtb_2023/test_data/Experiment-1568.czi
## [java.lang.Enum.toString] [WARN] Unknown DetectorType value 'GaAsP-PMT' will be stored as "Other"
## [java.lang.Enum.toString] [WARN] Unknown DetectorType value 'GaAsP-PMT' will be stored as "Other"
## [java.lang.Enum.toString] [WARN] Unknown DetectorType value 'GaAsP-PMT' will be stored as "Other"
## [java.lang.Enum.toString] [WARN] Unknown DetectorType value 'Multialkali-PMT' will be stored as "Other"
cellpose_result = invoke_cellpose(slice_directory, 'models/CP_20220523_104016')
## Error: KeyError: '/lab/scratch/atb/imaging/mtb_2023/test_data/outputs/Experiment-1568_z2/frame_120_cp_masks.png'
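The KeyError above suggests that cellpose wrote no mask for frame_120 (for example, no cells detected in that frame), while the result dictionary was keyed by the expected frame list. As a hedged sketch of a more defensive approach, one could build the frame-to-mask map from the files actually on disk; `collect_masks` is a hypothetical helper of mine, and the `_cp_masks.png` suffix convention is assumed from the error message, not from the cellpose documentation:

```python
import glob
import os

def collect_masks(slice_directory):
    """Build a {frame_tif: mask_png} map from the mask files cellpose
    actually wrote, rather than from the expected frame list, so a
    frame with no mask is simply absent instead of raising a KeyError."""
    masks = {}
    for mask in sorted(glob.glob(os.path.join(slice_directory, "*_cp_masks.png"))):
        # Recover the name of the source tif this mask corresponds to.
        frame = mask.replace("_cp_masks.png", ".tif")
        masks[frame] = mask
    return masks
```

Downstream code can then iterate over `masks.items()` and will never touch a frame that produced no segmentation.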
cellpose_result = collapse_z(raw_dataset, cellpose_result)
## Error: NameError: name 'cellpose_result' is not defined
slice_measurements = slices_to_roi_measurements(cellpose_result, collapsed = True)
## Error: NameError: name 'cellpose_result' is not defined
concatenated = convert_slices_to_pandas(slice_measurements)
## Error: NameError: name 'slice_measurements' is not defined
nearest = nearest_cells_over_time(concatenated, max_dist = 10.0,
                                  x_column = 'X', y_column = 'Y')
## Error: NameError: name 'concatenated' is not defined
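For reference, the core of a nearest-neighbour link between two consecutive frames can be sketched with plain numpy. `link_frames` is a hypothetical stand-in, not the notebook's `nearest_cells_over_time`; a real tracker would also need to handle ties, cells that appear or disappear, and many frames rather than two:

```python
import numpy as np

def link_frames(points_a, points_b, max_dist=10.0):
    """Match each point in frame A to its nearest neighbour in frame B,
    discarding matches farther than max_dist. Returns (index_a, index_b)
    pairs. Brute force, which is fine for a few hundred cells per frame."""
    points_a = np.asarray(points_a, dtype=float)
    points_b = np.asarray(points_b, dtype=float)
    pairs = []
    for i, p in enumerate(points_a):
        # Euclidean distance from this cell to every cell in the next frame.
        dists = np.hypot(*(points_b - p).T)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs.append((i, j))
    return pairs

# A cell near the origin moves slightly; a distant cell has no match
# within 10 px, so only one link survives.
links = link_frames([[0, 0], [50, 50]], [[1, 1], [100, 100]], max_dist=10.0)
```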
As a final step, we should be able to extract and play with the information from one or more groups of cells.
cell_id = 129
cell_idx = nearest[cell_id]
## Error: NameError: name 'nearest' is not defined
cell_data = concatenated.loc[cell_idx]
## Error: NameError: name 'concatenated' is not defined
len(cell_data)
## Error: NameError: name 'cell_data' is not defined
cell_data = cell_data.reset_index()
## Error: NameError: name 'cell_data' is not defined
scatter = plt.scatter(cell_data['X'], cell_data['Y'])
## Error: NameError: name 'cell_data' is not defined
final_row = cell_data.index.max()
## Error: NameError: name 'cell_data' is not defined
for start_time in range(0, final_row - 1):
    ti_idx = cell_data.index == start_time
    tj_idx = cell_data.index == start_time + 1
    p1x = cell_data[ti_idx].X
    p2x = cell_data[tj_idx].X
    p1y = cell_data[ti_idx].Y
    p2y = cell_data[tj_idx].Y
    x_points = [p1x, p2x]
    y_points = [p1y, p2y]
    plt.plot(x_points, y_points)
## Error: NameError: name 'final_row' is not defined
finalm1_idx = cell_data.index == final_row - 1
## Error: NameError: name 'cell_data' is not defined
final_idx = cell_data.index == final_row
## Error: NameError: name 'cell_data' is not defined
finalm1_x = cell_data[finalm1_idx].X
## Error: NameError: name 'cell_data' is not defined
final_x = cell_data[final_idx].X
## Error: NameError: name 'cell_data' is not defined
finalm1_y = cell_data[finalm1_idx].Y
## Error: NameError: name 'cell_data' is not defined
final_y = cell_data[final_idx].Y
## Error: NameError: name 'cell_data' is not defined
x_points = [finalm1_x, final_x]
## Error: NameError: name 'finalm1_x' is not defined
y_points = [finalm1_y, final_y]
## Error: NameError: name 'finalm1_y' is not defined
plt.plot(x_points, y_points)
## Error: NameError: name 'x_points' is not defined
plt.show()
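The segment-by-segment loop above draws the trajectory one line at a time. Assuming `cell_data` is ordered by timepoint, a single `plt.plot` call on the whole columns draws the same connected path in one go:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Toy stand-in for cell_data: one row per timepoint, already time-ordered.
cell_data = pd.DataFrame({"X": [0.0, 1.0, 2.5, 3.0],
                          "Y": [0.0, 0.5, 0.4, 1.2]})

# One call draws all consecutive segments of the trajectory.
line, = plt.plot(cell_data["X"], cell_data["Y"], marker="o")
```

This also keeps the whole trajectory in a single Line2D artist, which makes later styling or legend handling simpler than managing one artist per segment.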
seaborn.violinplot(data = cell_data.Area)
## Error: NameError: name 'cell_data' is not defined
plt.show()
pander::pander(sessionInfo())
R version 4.2.0 (2022-04-22)
Platform: x86_64-pc-linux-gnu (64-bit)
locale: LC_CTYPE=en_US.UTF-8, LC_NUMERIC=C, LC_TIME=en_US.UTF-8, LC_COLLATE=en_US.UTF-8, LC_MONETARY=en_US.UTF-8, LC_MESSAGES=en_US.UTF-8, LC_PAPER=en_US.UTF-8, LC_NAME=en_US.UTF-8, LC_ADDRESS=en_US.UTF-8, LC_TELEPHONE=en_US.UTF-8, LC_MEASUREMENT=en_US.UTF-8 and LC_IDENTIFICATION=en_US.UTF-8
attached base packages: stats4, stats, graphics, grDevices, utils, datasets, methods and base
other attached packages: reticulate(v.1.28), spatstat.geom(v.3.2-1), spatstat.data(v.3.0-1), hpgltools(v.1.0), testthat(v.3.1.8), SummarizedExperiment(v.1.28.0), GenomicRanges(v.1.50.2), GenomeInfoDb(v.1.34.9), IRanges(v.2.32.0), S4Vectors(v.0.36.2), MatrixGenerics(v.1.10.0), matrixStats(v.0.63.0), Biobase(v.2.58.0) and BiocGenerics(v.0.44.0)
loaded via a namespace (and not attached): rappdirs(v.0.3.3), rtracklayer(v.1.58.0), tidyr(v.1.3.0), ggplot2(v.3.4.2), clusterGeneration(v.1.3.7), bit64(v.4.0.5), knitr(v.1.42), DelayedArray(v.0.24.0), data.table(v.1.14.8), KEGGREST(v.1.38.0), RCurl(v.1.98-1.12), doParallel(v.1.0.17), generics(v.0.1.3), GenomicFeatures(v.1.50.4), callr(v.3.7.3), RhpcBLASctl(v.0.23-42), cowplot(v.1.1.1), usethis(v.2.1.6), RSQLite(v.2.3.1), shadowtext(v.0.1.2), bit(v.4.0.5), enrichplot(v.1.18.4), xml2(v.1.3.4), httpuv(v.1.6.10), viridis(v.0.6.3), xfun(v.0.39), hms(v.1.1.3), jquerylib(v.0.1.4), evaluate(v.0.21), promises(v.1.2.0.1), fansi(v.1.0.4), restfulr(v.0.0.15), progress(v.1.2.2), caTools(v.1.18.2), dbplyr(v.2.3.2), igraph(v.1.4.2), DBI(v.1.1.3), htmlwidgets(v.1.6.2), purrr(v.1.0.1), ellipsis(v.0.3.2), dplyr(v.1.1.2), backports(v.1.4.1), annotate(v.1.76.0), aod(v.1.3.2), biomaRt(v.2.54.1), deldir(v.1.0-6), vctrs(v.0.6.2), remotes(v.2.4.2), here(v.1.0.1), cachem(v.1.0.8), withr(v.2.5.0), ggforce(v.0.4.1), HDO.db(v.0.99.1), GenomicAlignments(v.1.34.1), treeio(v.1.22.0), prettyunits(v.1.1.1), DOSE(v.3.24.2), ape(v.5.7-1), lazyeval(v.0.2.2), crayon(v.1.5.2), genefilter(v.1.80.3), edgeR(v.3.40.2), pkgconfig(v.2.0.3), tweenr(v.2.0.2), nlme(v.3.1-162), pkgload(v.1.3.2), devtools(v.2.4.5), rlang(v.1.1.1), lifecycle(v.1.0.3), miniUI(v.0.1.1.1), downloader(v.0.4), filelock(v.1.0.2), BiocFileCache(v.2.6.1), rprojroot(v.2.0.3), polyclip(v.1.10-4), graph(v.1.76.0), Matrix(v.1.5-4), aplot(v.0.1.10), boot(v.1.3-28.1), processx(v.3.8.1), png(v.0.1-8), viridisLite(v.0.4.2), rjson(v.0.2.21), bitops(v.1.0-7), gson(v.0.1.0), KernSmooth(v.2.23-21), pander(v.0.6.5), Biostrings(v.2.66.0), blob(v.1.2.4), stringr(v.1.5.0), qvalue(v.2.30.0), remaCor(v.0.0.11), gridGraphics(v.0.5-1), scales(v.1.2.1), memoise(v.2.0.1), GSEABase(v.1.60.0), magrittr(v.2.0.3), plyr(v.1.8.8), gplots(v.3.1.3), zlibbioc(v.1.44.0), compiler(v.4.2.0), scatterpie(v.0.1.9), BiocIO(v.1.8.0), RColorBrewer(v.1.1-3), lme4(v.1.1-33), 
Rsamtools(v.2.14.0), cli(v.3.6.1), XVector(v.0.38.0), urlchecker(v.1.0.1), patchwork(v.1.1.2), ps(v.1.7.5), MASS(v.7.3-60), mgcv(v.1.8-42), tidyselect(v.1.2.0), stringi(v.1.7.12), highr(v.0.10), yaml(v.2.3.7), GOSemSim(v.2.24.0), locfit(v.1.5-9.7), ggrepel(v.0.9.3), grid(v.4.2.0), sass(v.0.4.6), fastmatch(v.1.1-3), tools(v.4.2.0), parallel(v.4.2.0), rstudioapi(v.0.14), foreach(v.1.5.2), gridExtra(v.2.3), farver(v.2.1.1), ggraph(v.2.1.0), digest(v.0.6.31), shiny(v.1.7.4), Rcpp(v.1.0.10), broom(v.1.0.4), later(v.1.3.1), httr(v.1.4.6), AnnotationDbi(v.1.60.2), Rdpack(v.2.4), colorspace(v.2.1-0), brio(v.1.1.3), XML(v.3.99-0.14), fs(v.1.6.2), splines(v.4.2.0), yulab.utils(v.0.0.6), PROPER(v.1.30.0), tidytree(v.0.4.2), spatstat.utils(v.3.0-3), graphlayouts(v.1.0.0), ggplotify(v.0.1.0), plotly(v.4.10.1), sessioninfo(v.1.2.2), xtable(v.1.8-4), jsonlite(v.1.8.4), nloptr(v.2.0.3), ggtree(v.3.6.2), tidygraph(v.1.2.3), ggfun(v.0.0.9), R6(v.2.5.1), RUnit(v.0.4.32), profvis(v.0.3.8), pillar(v.1.9.0), htmltools(v.0.5.5), mime(v.0.12), glue(v.1.6.2), fastmap(v.1.1.1), minqa(v.1.2.5), clusterProfiler(v.4.6.2), BiocParallel(v.1.32.6), codetools(v.0.2-19), fgsea(v.1.24.0), pkgbuild(v.1.4.0), mvtnorm(v.1.1-3), utf8(v.1.2.3), lattice(v.0.21-8), bslib(v.0.4.2), tibble(v.3.2.1), sva(v.3.46.0), pbkrtest(v.0.5.2), curl(v.5.0.0), gtools(v.3.9.4), GO.db(v.3.16.0), survival(v.3.5-5), limma(v.3.54.2), rmarkdown(v.2.21), desc(v.1.4.2), munsell(v.0.5.0), GenomeInfoDbData(v.1.2.9), iterators(v.1.0.14), variancePartition(v.1.28.9), reshape2(v.1.4.4), gtable(v.0.3.3) and rbibutils(v.2.2.13)
message(paste0("This is hpgltools commit: ", get_git_commit()))
## If you wish to reproduce this exact build of hpgltools, invoke the following:
## > git clone http://github.com/abelew/hpgltools.git
## > git reset 7d590afc508c9049ffc923388a6c0c7ea122a937
## This is hpgltools commit: Thu May 25 14:32:24 2023 -0400: 7d590afc508c9049ffc923388a6c0c7ea122a937
this_save <- paste0(gsub(pattern="\\.Rmd", replace="", x=rmd_file), "-v", ver, ".rda.xz")
message(paste0("Saving to ", this_save))
## Saving to index_functions_big-v20230530.rda.xz
tmp <- sm(saveme(filename=this_save))