Sparkmaker hacks

A while ago I backed a Kickstarter project by WoW called the Sparkmaker. The premise was an affordable resin 3D printer that could serve as a low entry point into resin printing. Against the odds, and with plenty of the usual delays and drama, the company delivered what it promised and shipped thousands of these little machines to their backers (me included).

About a year later they even came out with an upgraded version, the Sparkmaker FHD, that increased the print resolution from 480p to 1080p, a major and needed jump in fidelity.

It does what it was made to do: create small, highly detailed prints by exposing UV-sensitive resin one layer at a time.

Of course, by aiming for the absolute minimum price point, compromises had to be expected, and that's fine by me. It just means that there is room for improvement here and there. It's a bit of a work in progress.

Here is a collection of my 'hacks' to try and improve my printing experience with this super-low-cost printer.

The firmware is closed-source (although I am pretty sure it's Marlin-based) but the file format is a variant of gcode.

It is easy to modify to some extent (however, not sticking to a certain order of commands is a sure way to crash the printer).

Vat extension (Sparkmaker Original)

Just a little hack to increase the amount of resin available to a print without having to get up in the middle of the night to check the resin level in the vat.

Progress indicator (Sparkmaker Original)

Just a little thingamabob to keep track of the printing progress by counting the number of exposures that have elapsed. (The Sparkmaker Original does not have a display or Bluetooth connection to report its progress.)

Github repo

Sparkmaker (Original) antialiasing addon (discontinued)

The Sparkmaker Original, in addition to having a really low-resolution display, only understands black-and-white input images. So, natively, there is no way to feed it anti-aliased source images to smooth out the large pixels.

However, by hacking the gcode it is possible to inject more images and to control the intensity and time of each exposure. This opens the door to emulating anti-aliasing.
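As a sketch of how such emulation could work: a grey pixel should simply receive a proportionally shorter share of the total exposure, so an anti-aliased layer can be decomposed into a stack of binary masks, each exposed for a slice of the total time. The function name and parameters below are mine for illustration; only the 0.42s minimum lamp-on time comes from the firmware limits mentioned further down.

```python
import numpy as np

def decompose_aa_layer(gray, total_time, levels=4, min_on=0.42):
    """Split an anti-aliased (grayscale, 0..1) layer into binary sub-exposures.

    A pixel at grey level g should receive roughly g * total_time of UV.
    Thresholding at increasing levels yields nested binary masks; exposing
    each mask for an equal slice of the total time approximates that.
    Sub-exposures shorter than the firmware's 0.42s minimum are dropped.
    """
    slice_time = total_time / levels
    exposures = []
    for i in range(levels):
        threshold = (i + 0.5) / levels       # mid-point of each grey band
        mask = gray >= threshold             # pixels still "on" at this level
        if slice_time >= min_on and mask.any():
            exposures.append((mask, slice_time))
    return exposures

# toy layer: one fully lit pixel, one 50% grey edge pixel, one dark pixel
layer = np.array([[1.0, 0.5, 0.1]])
for mask, t in decompose_aa_layer(layer, total_time=8.0, levels=4):
    print(mask.astype(int), f"{t:.2f}s")
```

With 4 levels and 8s total, the fully lit pixel ends up in all four masks (8s of UV), the 50% pixel in two (4s), and the dark pixel in none.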

I ran many tests with less-than-exciting results and ran into several limitations of the firmware: multiple exposures without changing the layer number will eventually crash the printer; creating a new layer each time circumvents this, but the firmware is hardwired to stop at 6000 layers (it used to be 3000 until I pointed it out); the minimum possible lamp-on time is 0.42s; and so on.

Eventually I decided the disappointing results did not really justify looking into it further.

There are many opinions about anti-aliasing out there. In my opinion it only works in very specific situations and under the best of circumstances. If it is more work than ticking a checkbox in your slicer, it isn't worth the hassle.

Your mileage may vary.

(Anti-aliasing on the right, no AA on the left)

Sparkmaker FHD backlight issues (under investigation)

Sparkmaker FHD is the new and improved version of this printer and in addition to a higher resolution screen it is also equipped with a more powerful UV backlight.

Unfortunately, this backlight produces a very uneven exposure across the screen with hotspots directly under the LEDs as well as areas where the light cones overlap.

This results in regions that are over- or under-exposed in a regular pattern. When printing small or highly detailed objects this is not very prominent; however, large smooth surfaces will show obvious circular artifacts.

That's the bad news. The good news is, the FHD also comes with a new file format that embeds PNG grayscale images in the gcode. No more black-and-white-only signals.

So in theory, if we can measure the intensity of the UV light that hits the screen at each pixel, we could use the LCD screen to compensate for this overexposure and normalize the light.

I haven't quite managed to fully eliminate the problem yet, but here's my process, maybe you can find a flaw.

1) Mapping out how much UV hits the screen in different areas by taking a series of photos.

I covered the screen in tracing paper to capture the light on the screen rather than look through it.

First I soaked it in my rinsing-alcohol solution and let it dry. This covered it in resin, which glows under UV light, in the hope that more of the UV gets shifted into a spectrum my camera can see (UV filter and all).

(I ended up capturing the screen through a mirror. In the setup above the z-mount was always in the way.)

I then ran a program that projected white and a checkerboard pattern in succession and shot the screen from multiple angles. (The tracing paper doesn't diffuse the light enough, i.e. the observed illumination is not entirely view angle independent)

These pairs of images can then be used to re-project the perspective images back onto the screen area.

I used OpenCV for this (note that findChessboardCorners returns a success flag along with the corners):

found, corners = cv.findChessboardCorners(img, pattern_size)
h, status = cv.findHomography(corners, pnts_dst)
warped = cv.warpPerspective(img, h, (screen_w, screen_h))

Then I added all angles (10 altogether) together and normalized them.
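The adding and normalizing step can be sketched like this, assuming the ten re-projected views have already been aligned to the screen's pixel grid as float images in [0, 1] (the function name is mine):

```python
import numpy as np

def build_illumination_map(warped_views):
    """Average a list of aligned, re-projected screen images and
    normalize so the brightest spot maps to 1.0."""
    stacked = np.mean(np.stack(warped_views), axis=0)
    return stacked / stacked.max()
```

The result is a relative illumination map: 1.0 at the hot spots, lower values in the dim regions between the LEDs.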

The mapping isn't perfect but not too bad, including the black outline which is the tape the screen is held in place with.

Interesting to note, there are values in here that are less than half of the hot spots. Quite an uneven illumination.

2) Next I measured the LCD response to grey levels. The idea is to account for non-linearity of both the LCD as well as the camera. So I shot 11 images of grey levels from 0% (black) to 100% (white), averaged them down a little then scanned for the maximum pixel values and got this curve.
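A minimal sketch of that measurement, assuming the eleven shots are grayscale float arrays in [0, 1] with dimensions divisible by 4 (the 4x4 box-average size is my choice, just to tame sensor noise before taking the maximum):

```python
import numpy as np

def measure_response(shots):
    """Return (displayed grey level, observed peak brightness) pairs.

    shots[i] is the photo taken while the LCD displayed grey level
    i / (len(shots) - 1). Each photo is box-averaged 4x4 to reduce
    noise, then the maximum pixel value is taken as the response.
    """
    curve = []
    for i, img in enumerate(shots):
        h, w = img.shape
        small = img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
        curve.append((i / (len(shots) - 1), small.max()))
    return curve
```

Plotting these pairs gives the response curve described above.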

A somewhat expected gamma curve (except maybe the strange outlier in the middle, gotta double-check that one).

3) Creating an inverse lookup table from 2) and applying it to the illumination map from 1), I calculate an attenuation map that, in theory, should normalize the light down to an arbitrarily selected value of 0.5, a value close to the darkest pixels found. (You can't make dark pixels brighter, only attenuate bright pixels down. In retrospect I may have to go lower; the darkest pixels are actually closer to 40%.) I also blurred it a little to get rid of the fiber structure, or grain, of the tracing paper. The fact that the paper's grain shows up quite prominently is a good indication that the homography managed to align the images quite accurately.
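Here is roughly how the inverse lookup and attenuation map could be computed, assuming the measured response from step 2) is stored as two arrays with monotonically increasing brightness values (all names are illustrative):

```python
import numpy as np

def attenuation_map(illum, grey_in, brightness_out, target=0.5):
    """Grey level each pixel must display so that
    illumination * LCD_transmission == target everywhere.

    illum:          normalized illumination map from step 1 (peak = 1.0)
    grey_in:        displayed grey levels from step 2, increasing
    brightness_out: measured brightness for each grey level, increasing
    """
    # required relative transmission; dark regions are clamped to fully on
    needed = np.clip(target / illum, 0.0, 1.0)
    # invert the measured response: which input grey gives this output?
    return np.interp(needed, brightness_out, grey_in)
```

np.interp does the inverse lookup by swapping the roles of the measured x and y arrays, which is why the brightness values need to be increasing.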

4) A tiny python script then reads a sliced print.fhd file, decodes the included images and multiplies them with the attenuation map from 3).
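The per-layer multiplication itself is simple; the actual .fhd parsing lives in the repo linked below, so this sketch only shows the compensation step on an already-decoded 8-bit grayscale layer (the function name is mine):

```python
import numpy as np

def compensate_layer(layer, att_map):
    """Scale one decoded 8-bit grayscale layer image by the attenuation
    map from step 3 (same resolution, values in [0, 1])."""
    out = layer.astype(np.float32) * att_map
    return np.clip(out, 0, 255).astype(np.uint8)
```

Each compensated layer is then re-encoded as PNG and written back into the gcode in place of the original.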

Current Results: Better but not perfect. See pictures below. I managed to reduce the patterns in size and sharpness somewhat but they are still visible. I may have to normalize the map down further, to maybe 40% but I am not sure it's enough to suppress the patterns much more. I'd expect it to be a subtle change.

I think I am missing something obvious here.

My best guess is that my camera is not capturing enough of the UV spectrum to get a good representation of the effect. I had hoped that the visible part of the light was at least somewhat related to the amount of UV radiation, and maybe it is, but that means I need to increase the attenuation by a large, unknown factor. It seems there is still some non-linear relationship between the observed light and its effect on the resin that I am not fully taking into account.

More tests are in order.

It also means that we will lose most of the benefit of the much more powerful backlight by having to attenuate it so much. Meh.

To be continued.

Let me know if you have any thoughts on this.

(left: unprocessed print, 6s exposure, right: compensated print [40%], 12s exposure)

Want to have a look at the code? https://github.com/hooyah/sparkyFHD

Any good ideas? Leave me a note or discuss on my facebook https://www.facebook.com/fhuable