Computers Making Sense of Nissl-Stained Rat Brain Tissue

Dataset Statistics & IoU Results

Fornix only (FX)

train: 18.84 positive samples out of 41 (45.95%)
val: 43.98 positive samples out of 74 (59.43%)
test: 2.10 positive samples out of 299 (0.70%)

All White Matter

train: 818.22 positive samples out of 1842 (44.42%)
val: 809.55 positive samples out of 1949 (41.54%)
test: 28.37 positive samples out of 299 (9.49%)

Fornix with some White Matter

train: 21.39 positive samples out of 98 (21.82%), 47 non-zero samples
val: 51.42 positive samples out of 131 (39.25%), 92 non-zero samples
test: 2.10 positive samples out of 299 (0.70%), 6 non-zero samples

IoUs

White Matter: 0.49
Fornix: 0.37
Triplet Loss: 0.41

Goal

Here we provide baselines for semantic segmentation of the Fornix, a brain region apparent in coronal sections of Nissl-stained rat brain tissue. Specifically, we develop methods to generate numerical representations of Nissl-stained tissue. We then group similar representations and assess how well the groups estimate homogeneous tissue regions delineated by human experts.

Intro

To delineate brain regions automatically with a computer, we first need to represent brain tissue as numbers. A common approach in systems neuroscience laboratories is to store high-resolution photographs of Nissl-stained brain tissue as digital files. The resulting images have enough pixels to resolve cellular structure; for images with a resolution of 1 µm/px, for example, each pixel value represents a physical area of 1 µm².

At high resolutions, however, cellular structure is not encoded by single pixels, but rather by groups of pixels, or image patches. Assuming a cell measures roughly 20 µm × 20 µm and our images have a resolution of 1 µm/px, a patch of 60 × 60 pixels is enough to encode the cellular structure of a single cell and its immediate neighborhood. We can then use each patch to find similar patches and, ultimately, homogeneous regions in an image.
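As a rough sketch of this patching step (the non-overlapping 60 × 60 grid, the stride, and the single-channel image below are illustrative assumptions, not values fixed by the pipeline):

```python
import numpy as np

def extract_patches(image, patch_size=60, stride=60):
    """Cut a 2-D image (1 um/px) into square patches on a regular grid.

    Returns an array of shape (n_patches, patch_size, patch_size).
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Example: a synthetic 600 x 600 px image yields a 10 x 10 grid of patches.
image = np.random.rand(600, 600).astype(np.float32)
patches = extract_patches(image)
print(patches.shape)  # (100, 60, 60)
```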

Here we investigate several methods for deriving features from patches of 60 × 60 pixels. We then test each method's ability to generate features we can use to delineate a brain region called the Fornix. We assess the performance of each feature-extraction method by comparing the computer-generated delineations with a human delineation of the Fornix.

Deriving Features

We start with features that require no learning: the raw pixel data, the patch mean and standard deviation, and responses to hand-engineered Gabor convolutions. We then introduce learning with a self-supervised method, triplet loss, which learns the convolutional weights directly from the data.

Data

Mean & STD
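A minimal sketch of this baseline, assuming the patches have been stacked into an (N, 60, 60) NumPy array: each patch is reduced to two numbers, its intensity mean and standard deviation.

```python
import numpy as np

def mean_std_features(patches):
    """Summarize each patch by its intensity mean and standard deviation.

    patches: array of shape (n_patches, 60, 60).
    Returns an array of shape (n_patches, 2).
    """
    means = patches.mean(axis=(1, 2))
    stds = patches.std(axis=(1, 2))
    return np.stack([means, stds], axis=1)
```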

Hand-Engineered Convolutions
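One common way to hand-engineer such convolutions is a small Gabor filter bank. The sketch below uses scikit-image and SciPy; the frequencies and orientations are placeholder choices rather than the values used in these experiments.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor_kernel

def gabor_features(patch, frequencies=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Describe one 60 x 60 patch by the mean response magnitude of a Gabor bank."""
    feats = []
    for frequency in frequencies:
        for theta in thetas:
            kernel = gabor_kernel(frequency, theta=theta)
            # Convolve with the real and imaginary parts, then take the magnitude.
            real = ndi.convolve(patch, np.real(kernel), mode='wrap')
            imag = ndi.convolve(patch, np.imag(kernel), mode='wrap')
            feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
    return np.array(feats)  # one value per (frequency, orientation) pair

patch = np.random.rand(60, 60)
print(gabor_features(patch).shape)  # (8,)
```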

Learnable Convolutions
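A minimal PyTorch sketch of learning convolutional features with a triplet loss. The encoder architecture, embedding size, margin, and the way triplets are formed (nearby patches as anchor/positive, a distant patch as negative) are illustrative assumptions, not the exact training setup used here.

```python
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Tiny convolutional encoder mapping a 1 x 60 x 60 patch to a 32-D embedding."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = PatchEncoder()
loss_fn = nn.TripletMarginLoss(margin=1.0)

# Anchor/positive: patches assumed to come from the same tissue neighborhood;
# negative: a patch drawn from elsewhere in the section.
anchor = torch.randn(8, 1, 60, 60)
positive = torch.randn(8, 1, 60, 60)
negative = torch.randn(8, 1, 60, 60)
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
```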

Results

Here we compare the human annotation of the Fornix with predictions obtained by thresholding feature similarity, for each feature type, and report intersection over union (IoU).
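For reference, the IoU between a binary prediction mask and the human annotation can be computed with a few lines of NumPy:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two boolean masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union > 0 else 0.0
```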

Discussion

The hand-engineered features require extensive parameter tweaking, whether in the threshold parameter or in the choice of convolutions. The self-supervised approach resolves both the choice of threshold parameter and the choice of convolutional weights used for feature extraction, at the expense of having to establish a training protocol. This, in turn, frames the supervised approach, which establishes the training protocol as well as the threshold parameter and the convolutional weights.
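To make the role of the threshold parameter concrete, here is a hedged sketch of the kind of decision rule being tuned: a patch is labeled Fornix when its feature vector lies close enough to a reference Fornix feature vector. The Euclidean distance, the reference vector, and the threshold value are illustrative assumptions.

```python
import numpy as np

def threshold_predict(features, reference, threshold=1.0):
    """Label each patch as Fornix (True) when its feature vector lies within
    `threshold` Euclidean distance of a reference Fornix feature vector.

    features: array of shape (n_patches, n_features)
    reference: array of shape (n_features,)
    """
    distances = np.linalg.norm(features - reference, axis=1)
    return distances < threshold
```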