Computers Making Sense of Nissl-Stained Rat Brain Tissue

Goal

Here we provide baselines for semantic segmentation of the Fornix, a brain region apparent in coronal sections of Nissl-stained rat brain tissue. Specifically, we develop methods to generate numerical representations of Nissl-stained tissue. Then, we group similar representations and assess how well the groups estimate homogeneous tissue regions delineated by human experts.

Intro

To automatically delineate brain regions with a computer, we first need to represent brain tissue as digits. A common approach in systems neuroscience laboratories is to store high-resolution photographs of Nissl-stained brain tissue as digital files. The resulting images have enough pixels to resolve cellular structure; for images with a resolution of 1 µm/px, for example, each pixel value, or digit, represents a physical area of 1 µm².

However, at high resolutions, cellular structure is not encoded by single pixels, but rather by groups of pixels, or image patches. Assuming a cell measures ~20 µm × 20 µm and our images have a resolution of 1 µm/px, a single cell spans roughly 20 × 20 px; adding a one-cell margin on each side, a patch of 60 × 60 px encodes the cellular structure of a single cell and its immediate neighborhood. We could then use the patch to find similar patches and ultimately find homogeneous regions in an image.
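As a minimal sketch of the patching step (assuming the image is a 2-D NumPy array; the function name and the decision to discard partial edge patches are illustrative, not part of the pipeline described here):

```python
import numpy as np

def extract_patches(image, patch_size=60):
    """Split a 2-D grayscale image into non-overlapping square patches.

    Edge pixels that do not fill a complete patch are discarded.
    Returns an array of shape (n_patches, patch_size, patch_size).
    """
    h, w = image.shape
    rows, cols = h // patch_size, w // patch_size
    trimmed = image[:rows * patch_size, :cols * patch_size]
    patches = trimmed.reshape(rows, patch_size, cols, patch_size)
    return patches.transpose(0, 2, 1, 3).reshape(-1, patch_size, patch_size)
```

For example, a 600 × 600 px image at 1 µm/px yields 100 patches of 60 × 60 px, each covering 60 µm × 60 µm of tissue.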

Here we investigate several methods to derive features from 60 × 60 px patches. We then test each method's ability to generate features we can use to delineate a brain region called the Fornix. We assess the performance of each feature extraction method by comparing the computer-generated delineations with a human delineation of the Fornix.

Deriving Features

We derive features in four ways of increasing sophistication: the raw pixel data itself, the patch mean and standard deviation, hand-engineered Gabor convolutions, and learnable convolutions trained with a self-supervised triplet loss.

Data

Mean & STD
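A simple baseline is to summarize each patch by the mean and standard deviation of its pixel intensities, giving a two-dimensional feature per patch. A minimal sketch, assuming patches arrive as a NumPy array (the function name is illustrative):

```python
import numpy as np

def mean_std_features(patches):
    """Represent each patch by (mean, std) of its pixel intensities.

    patches: array of shape (n_patches, H, W).
    Returns features of shape (n_patches, 2).
    """
    means = patches.mean(axis=(1, 2))
    stds = patches.std(axis=(1, 2))
    return np.stack([means, stds], axis=1)
```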

Hand-Engineered Convolutions
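A hand-engineered option is a bank of Gabor filters: Gaussian-windowed sinusoids at several orientations whose responses capture local texture. The sketch below is illustrative only; the kernel size, bandwidth, wavelength, and orientations are assumptions, not the values used in this project:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize=31, sigma=4.0, theta=0.0, wavelength=10.0):
    """Real-valued Gabor kernel: an isotropic Gaussian-windowed cosine."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)  # coordinate along theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_features(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """One feature per orientation: mean response magnitude over the patch."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        # valid-mode correlation via sliding windows keeps the sketch
        # dependency-free; a real pipeline would use an FFT-based convolution
        windows = sliding_window_view(patch, k.shape)
        response = np.einsum('ijkl,kl->ij', windows, k)
        feats.append(np.abs(response).mean())
    return np.array(feats)
```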

Learnable Convolutions
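Rather than fixing the convolutional weights by hand, they can be learned with a self-supervised triplet loss: an anchor patch is pulled toward a positive patch and pushed away from a negative patch until the two distances differ by at least a margin. The sketch below shows only the loss on embedding vectors; the optimization of the convolutional weights themselves would require an autodiff framework, and how anchors, positives, and negatives are sampled is not specified here:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors.

    Zero when the anchor is already closer to the positive than to the
    negative by at least `margin`; positive otherwise.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```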

Results

We compare the human annotation against thresholded predictions derived from each feature set, scoring each comparison with intersection over union (IoU).
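IoU divides the number of pixels where prediction and annotation agree on the Fornix by the number of pixels where either marks it, so 1.0 is perfect overlap and 0.0 is none. A minimal sketch on binary masks (the empty-union convention is an assumption):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union
```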

Discussion

The hand-engineered methods require extensive parameter tweaking, whether in the threshold parameter or in the choice of convolutions. The self-supervised approach removes the need to choose the threshold parameter and the convolutional weights for feature extraction, at the expense of having to establish a training protocol. This framing also sets up the supervised approach, which establishes the training protocol as well as the threshold parameter and the convolutional weights.