
Visual-3D-Print-Error-Detection

This repository is for a course titled "Learning from Images" at BHT. Our project is to detect whether a 3D print has failed with the help of cameras, CNNs, and domain shifting.

Preface

This idea is not new and has been done before, but my colleague and I want to implement it ourselves. We are using an already labelled dataset from Kaggle in which the camera is mounted directly to the nozzle. This is perhaps not ideal for detecting failures of the whole print, but it can be used for first-layer fail detection. Furthermore, this setup could also be used to tune the printing parameters live (which has also been done before).

Camera Mount

As already mentioned, the camera is mounted directly to the print head via a 3D-printed mount https://www.printables.com/de/model/17993-prusa-mk3s-55mm-nozzle-camera-mount :

prusa_i3_mk3s_endocamera_mount_17993

This mount may not be the perfect one for the camera position and the images we generate with it. There are many alternatives on Printables, and if none of them match our requirements we could design one ourselves.

We tried another setup and mount because we thought it would better match the angle and general type of pictures in the Kaggle dataset https://www.printables.com/de/model/48381-prusa-mk3s-nozzle-camera-mount-no-support-required:

endoholder

In the end we went back to the original mount, because its image and angle turned out to be the best. Nonetheless, we used the enhanced version of it with a screw that secures the camera against rotation, which ensures a stable image.

This mount was prone to breaking, which was annoying but not a real issue, as we could simply print it again. The mount could also have been improved further for long-term use, but at the time of writing we did not see that as a priority.

Camera & Lights

The camera itself is an endoscope-type camera with some LEDs around the lens. The camera was not very expensive, which is the main reason we chose it, along with its size and its ability to be mounted close to the nozzle.

A problem occurred as we viewed the images. Their quality was acceptable in bright environments but lacking in dark ones. The light at the end of the endoscope reflected on the lens and created artefacts; furthermore, the resolution was very low, which was especially noticeable in the dark.

The lighting as a whole was a very delicate matter. As mentioned before, there are tiny LED lights at the end of the lens. These are dimmable, which helps, but we can only use them at 25 to 50 % of their capacity, because beyond that they cause lens artefacts. We also have lights mounted in the enclosure of the printer. One might think that would help, but they just throw shadows onto the print bed and the nozzle, which makes the detection of errors even harder.

In the end we did some surgery on the camera and delicately removed the screen that was in front of the lens, which helped with the lens-flare problem. However, the lens was then no longer protected from impacts or anything else.

3D-Printing

The printing is done on one of our own upgraded Prusa MK3S+ printers. As filament we use white PLA from FilaFarm, because it gives the best contrast to the green print bed and the gold-like nozzle. We are using a standard profile with 0.2 mm layer height, 100 % print speed (standard print speed: 50-60 mm/s), and a 0.4 mm nozzle. These are among the most widely used settings in the 3D-printing community.

In later stages we tried green filament, because the reflective properties of the white filament may have caused problems. Surprisingly, the datasets also contained very little white filament and considerably more green. Green did not seem to improve the recognition, however. We then tried blue filament, of which there was an abundance in the dataset, and also red; neither improved the results. No colour change helped.


Data Sets

As already mentioned, we are using a Kaggle dataset with around 25k images, which should be enough for this type of work: https://www.kaggle.com/datasets/gauravduttakiit/early-detection-of-3d-printing-issues. Another dataset we may use is the Cambridge one. It contains around 1.25 million images, which are classified into more labels than the Kaggle one and carry additional descriptive aspects such as temperature and speed. These details make it better suited for a live parameter-adjustment program.

After further investigation it appeared that the Kaggle dataset is not focused on general failures like bed adhesion, stringing, or simply awful print starts. It is focused on under-extrusion.

651004c30d85231bd021dff7_Extrusion-1600x679

This is not a problem in itself, but under-extrusion is a failure type that does not occur unless the user did something majorly wrong. Under-extrusion only happens when parameters like the feed rate, or clogging, come into effect, while problems like bed adhesion are a much more everyday problem for a user. Under-extrusion is also not as easy to recreate: you have to forcibly turn the extrusion parameters down, whereas for bed adhesion you just touch the bed and that is enough.

Our next steps were to collect our own data from prints and try some domain shifting. The strategy is to produce one failed and two good prints in every colour (red, green, blue, and pink). This should be enough, because we capture every frame, which amounts to a significant amount of data.
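A quick back-of-envelope calculation shows why "every frame" of a handful of prints already yields a lot of data. The frame rate and print duration below are illustrative assumptions, not measured values from our setup:

```python
# Rough estimate of how many frames the recording strategy produces.
FPS = 30               # assumed camera frame rate
PRINT_MINUTES = 20     # assumed duration of one small test print
COLOURS = 4            # red, green, blue, pink
PRINTS_PER_COLOUR = 3  # 1 failed + 2 good prints per colour

frames_per_print = FPS * PRINT_MINUTES * 60
total_frames = frames_per_print * COLOURS * PRINTS_PER_COLOUR
print(frames_per_print, total_frames)  # 36000 432000
```

Even with these modest assumptions, twelve prints produce several hundred thousand frames, an order of magnitude more raw images than the Kaggle dataset.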

Code

The code concept works as follows: for the 3D error detection we use CNNs combined with different datasets.

We are using 2+1 different datasets. One is the unchanged dataset from Kaggle. The second one is the Kaggle dataset mixed, through domain shifting, with our own recorded data. Then we also have one where we used transformations to simulate a bad camera image.
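The "bad camera" variant can be sketched as a simple image degradation: resolution loss, darkening, and sensor noise. This is a minimal NumPy illustration of the idea, not the actual transformation pipeline used in TRAIN_MODEL_IMP_R50.py; the function name and parameters are ours:

```python
import numpy as np

def simulate_bad_camera(img, scale=4, noise_std=10.0, brightness=0.7, seed=None):
    """Degrade an image to mimic a cheap endoscope camera:
    downscale/upscale (resolution loss), darken, add Gaussian noise."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    # resolution loss: nearest-neighbour downscale, then blow back up
    small = img[::scale, ::scale]
    low_res = np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)[:h, :w]
    # darken and add sensor noise
    noisy = low_res.astype(np.float32) * brightness
    noisy += rng.normal(0.0, noise_std, size=noisy.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Applying such a transform to the clean Kaggle images is one way to make a model more robust to the low-quality footage our endoscope produces.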

We use two files for testing the models. One works on a dataset of images (TEST_MODEL_with_DATA_SET.py); the other works on a video or stream (TEST_MODEL_with_VIDEO.py).

We also have multiple files for training on the datasets. TRAIN_MODEL.py functions as a template for implementing your own model. TRAIN_MODEL_IMP_R50.py is used with the dataset that applies the image transformations simulating a bad camera. Furthermore, we have TRAIN_MODEL_MIX_R18.py, which is used with the domain-shifted dataset, and of course TRAIN_MODEL_NORMAL_R18.py, which is used with the unedited Kaggle dataset.

The final file is ERROR_PRINT_DETECTION.py, which is the main file of the project, where we can use any of the models trained on the datasets mentioned earlier to detect an error in the 3D print. If 80 percent of the last 200 frames are classified as a fail, the printer gets stopped.
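The stopping rule can be sketched with a rolling window over per-frame classifications. This is a minimal illustration of the decision logic described above; the class name and structure are ours and not necessarily how ERROR_PRINT_DETECTION.py implements it:

```python
from collections import deque

class FailureMonitor:
    """Signal a stop when at least `threshold` of the last `window`
    frame classifications are 'fail'."""

    def __init__(self, window=200, threshold=0.8):
        self.window = window
        self.threshold = threshold
        self.history = deque(maxlen=window)  # old entries drop out automatically

    def update(self, is_fail: bool) -> bool:
        """Record one frame's classification; return True if the printer
        should be stopped."""
        self.history.append(is_fail)
        # only decide once the window is full, to avoid early false stops
        if len(self.history) < self.window:
            return False
        return sum(self.history) / self.window >= self.threshold
```

Averaging over 200 frames smooths out single misclassified frames, so one noisy prediction cannot trigger a stop on its own.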

Octoprint integration

The printer is controlled via a Raspberry Pi 3B+ with OctoPrint installed on it (https://octoprint.org/), which makes controlling the printer pretty easy. One problem, though, is recording the footage for the neural net. The way we do it is to stream the camera image to the OctoPrint browser control page and record it from there with OBS on a PC. The reason we don't record directly on the Raspberry Pi is that the file sizes are too big for the small SD card; it is also easier to interact with the code and the footage directly on the PC.

The control of the printer was also planned to go through OctoPrint. OctoPrint offers API access, which sounded great for our use case, but the integration was more difficult than expected. The first attempt was over the API, the second over XPath. Neither seemed to work, because we did not have the necessary certificates or the API key did not work. In the end we resorted to very simple code: streaming our PC screen and using a host address for the video input, then an auto-clicker simply hits the button in the OctoPrint user interface.
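For reference, the API route we originally attempted can be sketched with the standard library alone. OctoPrint's REST job API accepts a POST to `/api/job` with a `pause` command, authenticated via the `X-Api-Key` header. The host name and API key below are placeholders, and this is a sketch of the intended integration, not code that shipped in the project:

```python
import json
import urllib.request

OCTOPRINT_URL = "http://octopi.local"  # assumed host of the OctoPrint instance
API_KEY = "YOUR_API_KEY"               # generated in OctoPrint's settings UI

def build_pause_request(base_url=OCTOPRINT_URL, api_key=API_KEY):
    """Build (but do not send) a POST request that pauses the current
    print job via OctoPrint's REST API."""
    payload = json.dumps({"command": "pause", "action": "pause"}).encode()
    return urllib.request.Request(
        base_url + "/api/job",
        data=payload,
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="POST",
    )

# Sending it requires a reachable OctoPrint instance and a valid key:
# with urllib.request.urlopen(build_pause_request()) as resp:
#     print(resp.status)  # OctoPrint answers 204 No Content on success
```

Had the API key worked, this would have replaced the screen-streaming auto-clicker workaround entirely.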