Visual-3D-Print-Error-Detection
This repository is for a course titled "Learning from Images" at BHT. Our project is to detect whether a 3D print has failed with the help of cameras, CNNs, and domain shifting.
Preface
This idea is not new and has been done before, but my colleague and I wanted to implement it ourselves. We are using an already labeled dataset from Kaggle in which the camera is mounted directly to the nozzle. This is perhaps not ideal for detecting failures of the whole print, but it can be used for first-layer print fail detection. With this setup one could also tune the printing parameters live (this has been done before as well).
Camera Mount
The camera is, as already mentioned, mounted directly to the print head via a 3D-printed mount https://www.printables.com/de/model/17993-prusa-mk3s-55mm-nozzle-camera-mount :

This mount may not be the perfect one for the camera position and the image we generate through it. There are many alternatives on Printables, and if none of them match our requirements we could design one ourselves.
We tried another setup and mount because we thought it would better match the angle and general type of pictures in the Kaggle dataset https://www.printables.com/de/model/48381-prusa-mk3s-nozzle-camera-mount-no-support-required:
In the end, we went back to the original mount because its image and angle turned out to be the best. Nonetheless, we used the enhanced version of it with a screw that secures the camera against rotation, which ensures a stable image.
This mount was prone to breaking, which was annoying but not a real issue, as we could simply print it again. The mount could also have been improved further for long-term use, but at the time of writing we did not see that as a priority.
Camera & Lights
The camera itself is an endoscope-type camera with some LEDs around the lens. The camera was inexpensive, which is the main reason we chose it; its size and its ability to be mounted close to the nozzle were also deciding arguments. Furthermore, the wide range of nozzle mounts for endoscopes on Printables reassured us in our choice.
A problem occurred as we viewed the images. Their quality was okay in bright environments but lacking in dark ones. The light at the end of the endoscope reflected on the lens and created artifacts, and the resolution was also very low, which was especially noticeable in the dark.
The lighting as a whole was a very delicate matter. As mentioned before, there are tiny LEDs at the end of the lens. These are dimmable, which helps, but we can only use them at 25 to 50% of their capacity, because beyond that they create lens artifacts. We also have lights mounted in the enclosure of the printer. One might think these would help, but they just throw shadows onto the print bed and the nozzle, which makes detecting errors even harder.
In the end, we performed some surgery on the camera and delicately removed the screen in front of the lens, which helped with the lens flare problem. However, the lens was then no longer protected from impacts or anything else.
3D-Printing
The printing is done on one of our own upgraded Prusa MK3S+ printers. In the beginning, we used white PLA filament from FilaFarm because it gives the best contrast to the green print bed and the gold-like nozzle. We are using a standard profile with 0.2 mm layer height, 100% print speed (standard print speed: 50-60 mm/s), and a 0.4 mm nozzle. These settings are the most widely used in the 3D printing community.
In later stages, we tried green filament because the reflective properties of the white filament may have caused problems. Surprisingly, the datasets also contain very little white filament and, for example, much more green. Neither dark nor light green seemed to improve the recognition. We also tried blue, pink, and red, which did not improve the results either. No color change delivered significant changes in the results.
Data Sets
As already mentioned, we are using a Kaggle dataset with around 25k images, which should be enough for this type of work https://www.kaggle.com/datasets/gauravduttakiit/early-detection-of-3d-printing-issues. Another dataset we considered is the Cambridge one. It contains around 1.25 million images, which are classified into more labels than the Kaggle one and carry additional descriptive aspects such as temperature and speed. These details make it better suited for a live parameter-adjustment program.
After further investigation, it turned out that the Kaggle dataset is not focused on general failures like bed adhesion, stringing, or just awful print starts. It is focused on under-extrusion.
This is not a problem in itself, but under-extrusion is a failure type that does not occur unless the user did something majorly wrong. Under-extrusion only happens when factors like the feed rate or clogging come into play, whereas problems like bed adhesion are a much more everyday problem for a user. It is also not as easy to recreate: for under-extrusion, you have to manually choose bad parameters to turn the extrusion down; for bed adhesion, you just touch the bed and that is enough.
Our next step was to collect our own data from prints and try some domain shifting. The strategy is to produce one failed and two good prints of every color (red, green, blue, and pink). This should be enough because we capture every frame, which amounts to a significant amount of data.
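Below is a minimal sketch of how every frame could be extracted from a recorded print video with OpenCV; the paths and naming scheme are illustrative assumptions, not the exact scripts from this repository.

```python
# Minimal sketch: extract every frame of a recorded print video with OpenCV.
# Paths and file naming are illustrative, not the repo's exact conventions.
import os
import cv2

def extract_frames(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video reached
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx  # number of frames written

# e.g. extract_frames("prints/green_fail_01.mp4", "dataset/green/fail")
```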
Code
The code concept works as follows: for the 3D error detection, we use CNNs combined with different datasets.
We are using 2+1 different datasets. The first is the unchanged dataset from Kaggle. The second is the Kaggle dataset mixed, through domain shifting, with our own recorded data. The third is one where we used transformations to simulate a bad camera image.
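As a rough illustration of the last variant, a degradation pipeline could look like the following torchvision sketch; the actual transforms and parameters used in training may differ.

```python
# Sketch of an augmentation pipeline that degrades images to simulate a poor
# camera. The concrete transforms in this repo may differ; this shows the idea.
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Adds sensor-like noise to a tensor image in [0, 1]."""
    def __init__(self, std=0.05):
        self.std = std
    def __call__(self, tensor):
        return (tensor + torch.randn_like(tensor) * self.std).clamp(0.0, 1.0)

bad_camera = transforms.Compose([
    transforms.Resize(64),             # throw away detail like a low-res sensor
    transforms.Resize(224),            # scale back up to the network input size
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # unstable exposure
    transforms.GaussianBlur(kernel_size=5),                # soft endoscope optics
    transforms.ToTensor(),
    AddGaussianNoise(0.05),            # noise as seen in dark scenes
])
```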
We use two files for testing the datasets: one for a dataset of images, TEST_MODEL_with_DATA_SET.py, and one for a video or stream, TEST_MODEL_with_VIDEO.py.
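A dataset test could, in principle, look like the sketch below; the folder layout, preprocessing, and checkpoint name are assumptions, not the exact contents of TEST_MODEL_with_DATA_SET.py.

```python
# Sketch of evaluating a trained classifier on an image dataset.
# Paths, preprocessing, and the checkpoint name are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
test_set = datasets.ImageFolder("dataset/test", transform=tfm)  # folders: good/, fail/
loader = torch.utils.data.DataLoader(test_set, batch_size=32)

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: good, fail
model.load_state_dict(torch.load("model_r18.pth", map_location=device))
model.to(device).eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print(f"accuracy: {correct / total:.3f}")
```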
We also have multiple files for training. TRAIN_MODEL.py functions as a template for implementing your own model. TRAIN_MODEL_IMP_R50.py is used with the dataset that applies the image transformation simulating a bad camera. Furthermore, TRAIN_MODEL_MIX_R18.py is used with the domain-shifted dataset, and TRAIN_MODEL_NORMAL_R18.py is used with the unedited Kaggle dataset.
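For orientation, fine-tuning a ResNet-18 for the binary good/fail classification could look roughly like this; the hyperparameters, paths, and epoch count are assumptions, not the repo's exact settings.

```python
# Minimal fine-tuning sketch for a binary good/fail ResNet-18 classifier.
# Hyperparameters and dataset paths are assumptions, not the exact settings.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("dataset/train", transform=tfm)  # folders: good/, fail/
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the head with a 2-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "model_r18.pth")
```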
The final file is ERROR_PRINT_DETECTION.py, the main file of the project, where we can use a model trained on any of the earlier mentioned datasets to detect an error in the 3D print. If 80 percent of the last 200 frames are classified as a fail, the printer gets stopped.
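That stop rule can be expressed as a sliding window over the per-frame predictions; the following is a sketch of the logic, not the file itself, and the function name is illustrative.

```python
# Sketch of the stop rule: stop the printer once 80% of the last 200
# frame classifications are "fail". Names here are illustrative.
from collections import deque

WINDOW = 200
THRESHOLD = 0.8

recent = deque(maxlen=WINDOW)  # holds 1 for "fail", 0 for "good"

def should_stop(frame_is_fail: bool) -> bool:
    recent.append(1 if frame_is_fail else 0)
    # Only decide once the window is full, so a handful of early
    # fail frames cannot trigger a stop on their own.
    return len(recent) == WINDOW and sum(recent) / WINDOW >= THRESHOLD
```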
Octoprint integration
The printer is controlled via a Raspberry Pi 3B+ with OctoPrint installed on it, https://octoprint.org/. This makes controlling the printer quite easy. One remaining problem, though, is recording the footage. The way we do it is to stream the camera image to the OctoPrint browser control page and record it from there with RECORDER_STREAMER.py. The reason we do not record directly on the Raspberry Pi is that the file sizes are too big for the small SD card, and interacting with the code and the footage is easier directly on the PC.
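Grabbing and saving such a stream on the PC could look roughly like this OpenCV sketch; the stream URL, codec, and frame rate are assumptions, not the exact contents of RECORDER_STREAMER.py.

```python
# Rough sketch: record the OctoPrint webcam stream on the PC with OpenCV.
# Stream URL, codec, and frame rate are assumptions, not the repo's values.
import cv2

STREAM_URL = "http://octopi.local/webcam/?action=stream"  # typical mjpg-streamer URL

cap = cv2.VideoCapture(STREAM_URL)
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        # Create the writer once the actual frame size is known.
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("recording.mp4", fourcc, 25.0, (w, h))
    writer.write(frame)
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop recording
        break

cap.release()
if writer is not None:
    writer.release()
cv2.destroyAllWindows()
```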
The control of the printer was also planned to go through OctoPrint. OctoPrint offers API access, which sounded great for our use case, but the integration turned out to be more difficult than expected. The first attempt was via the API, the second via XPath; neither seemed to work, because we did not have the necessary certificates or the API key did not work. In the end, we resorted to very simple code: streaming our PC screen and using a host address as the video input for RECORDER_STREAMER.py. A clicker then just hits the button in the OctoPrint user interface.
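The clicker could be as simple as the following pyautogui sketch; the screen coordinates are placeholders and would have to be measured on the actual machine.

```python
# Sketch of the fallback "clicker": stop the print by clicking the cancel
# button in the OctoPrint web UI at a fixed screen position.
# Both coordinate pairs are placeholders, not real button positions.
import pyautogui

CANCEL_BUTTON_POS = (960, 540)    # placeholder x/y of the cancel button
CONFIRM_BUTTON_POS = (1000, 600)  # placeholder x/y of the confirmation dialog

def stop_print_via_ui():
    pyautogui.click(*CANCEL_BUTTON_POS)
    # Depending on the OctoPrint settings, a confirmation dialog may pop up.
    pyautogui.click(*CONFIRM_BUTTON_POS)
```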