Within the myrobotlab GUI, you can use OpenCV as a "plugin" to test its various processing features, prototype a pipeline, etc. - it's easier than diving straight into the Python bindings as a first step.
Another thing is: What's a binary image?
For anybody who is not so familiar with computer vision, that's probably not a simple concept - how exactly does a computer see the world? Well, one common filtering step is breaking a visual view of the world down into a "binary image", an image whose pixels take only two values (e.g. black and white). Wikipedia has a nice explanation of the concept.
The next probably unfamiliar thing: What are image moments?
You are not the first to ask. Wikipedia has some explanation: "an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation". Hm... A more approachable explanation can be found here. Usually you'll extract moments after thresholding, filtering, generating binary images, blob detection or edge detection - all of these are related concepts. In OpenCV some docs are here and elsewhere.