


Summary: The automatic filling machine uses a vision system to process images of the workpiece, obtain the coordinates of the workpiece at its random position, and then complete the process operation; this intelligent working mode is used more and more widely in industrial production. After the vision cameras and the manipulator are calibrated, absolute coordinates are established so that the workpiece can be positioned accurately and the manipulator can carry out the capping work. The automatic filling machine consists of a manipulator, vision, servo drives, sensors and a PLC. It can loosen the plastic cap and the iron cap at the mouth of an empty oil drum, tighten the iron cap at the mouth of a full drum, and complete the filling process; once connected to the filling system, fully automatic filling is realized. The system has been put into use at the Yihai Grain and Oil site, where it runs stably and meets production needs.
0 Introduction
With the continuous development of industrial automation technology, manipulators have developed rapidly in industrial production, their degree of intelligence keeps increasing, and their application in modern industrial production is becoming more and more widespread. For filling lines handling large 220 L oil drums, the capping and cap-screwing step is still carried out manually in most factories. Because labor and management costs keep rising, many factories hope to automate this step. The difficulty lies in giving the equipment a recognition capability comparable to the human eye, and at the same time designing the tooling needed to replace manual operation. As empty drums travel along the conveyor, the drum mouth may arrive at any angle, so a vision camera is used in place of the human eye to identify the drum-mouth position accurately. Through data interaction between the robot, the vision system and the electrical control system, intelligent robot operation is finally realized, which reduces the factory's labor and management costs and improves production efficiency.
1 Process flow
The weighing and filling machine includes the empty-drum cap-pulling area B1, the oil filling area B2, the cap-screwing area B3, the vision camera S1 for the empty-drum area, the vision camera S2 for the full-drum area, and the manipulator with its fixture.
First, the automatic filling machine executes its initialization and decides whether to enter the loosen-cap subprogram or the screw-cap subprogram according to the state of the cap-screwing jaws. When the weighing module in the loosen-cap area B1 detects that an empty drum is in place, the manipulator stops at the position where it waits for vision S1 to take a picture. Vision S1 takes the 1st picture, recognizes the angular position of the drum's plastic cap relative to the datum line and sends the angle to the manipulator, which grips the empty drum with the suction cup on the fixture, rotates it to the 0-degree datum position and returns to the waiting position. Vision S1 takes the 2nd picture, identifies the position of the white plastic cap, compares the identified angle with the reference position and checks that the deviation is within the permitted value; at the same time it sends the X, Y coordinates of the white plastic cap to the manipulator, which moves to that coordinate position, pulls out the white plastic cap, places it in the plastic-cap bin and returns to the waiting position. Vision S1 takes the 3rd picture and confirms that the white plastic cap has been pulled out. Vision S1 takes the 4th picture and recognizes the angle R of the two lugs inside the iron cap together with the coordinate position of the iron cap; the servo motor rotates according to the angle R so that the cap-screwing jaws cross the lugs of the iron cap at 90°, the manipulator moves to the loosening position according to the X, Y coordinates, the servo motor rotates counterclockwise to unscrew the iron cap, and the manipulator returns to the waiting position. Vision S1 takes the 5th picture to confirm that the iron cap has been unscrewed; once it has, the empty drum in the loosen-cap area is allowed to enter the filling area.
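In the 4th photo step, the jaws must end up crossing the two lugs of the iron cap at 90°. The sketch below illustrates that angle calculation under the assumption that the two lugs make the cap 180°-symmetric, so the target orientation can be reduced to the smallest equivalent rotation; the function name and this reduction are illustrative, not taken from the actual robot program.

def servo_target_angle(lug_angle_deg: float) -> float:
    """Target jaw orientation (degrees) that crosses the cap lugs at 90 deg.

    lug_angle_deg is the angle R of the two lugs reported by vision S1. Because
    the two lugs are 180 deg apart, angles are reduced modulo 180 and the result
    is returned in the range (-90, 90] so the servo takes the shorter rotation.
    """
    target = (lug_angle_deg + 90.0) % 180.0   # orientation 90 deg away from the lugs
    if target > 90.0:
        target -= 180.0                        # equivalent orientation, shorter move
    return target

# Example: lugs measured at R = 37.5 deg -> rotate the jaws to -52.5 deg.
print(servo_target_angle(37.5))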
After the B2 weighing module in the filling area detects that the empty drum is in place, the weighing and filling machine starts filling; when filling is finished, the drum is allowed to run to the cap-screwing area. The robot moves to the position where it waits for vision S2 to take a picture. Once the B3 weighing sensor detects the drum in place in the screwing area, vision S2 takes the 1st picture and sends the coordinates of the drum mouth to the robot. The robot executes the screwing program: the servo motor on the fixture drives the screwing jaws to rotate clockwise, the servo stops when the required tightening torque is reached, and the robot rises and returns to the waiting position, completing the screwing action. Vision S2 takes the 2nd picture and confirms that the iron cap has been tightened; the drum in the screwing area is then allowed out, and the robot returns to the position where it waits for vision S1, ready for the cap-pulling process on the next empty drum.
2 Machine vision system principle
A machine vision system uses machines to replace the human eye for a variety of measurements and judgments. The principle is to build an image template from the camera image and then establish a functional relationship between the image coordinates and the (two-dimensional) manipulator coordinates. During operation the camera identifies the two-dimensional position of the workpiece within its field of view and calculates its coordinates, so that the manipulator can move to the exact position required by the process, simulating human visual behavior together with the related equipment. When the position of the workpiece is not fixed and the workpiece models and colors vary, the vision system can still identify the workpiece model, color and position, and either guide the robot to place the identified workpiece in the designated area or drive the corresponding fixture through the robot to complete the process operation. The vision software links the image coordinate system and the robot coordinate system by multi-point calibration and guides the robot to complete the process operation on the located target.
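In the two-dimensional case this functional relationship is typically an affine transformation. The sketch below, assuming the transform has already been obtained by calibration (see 2.3), shows how a pixel coordinate reported by the camera would be converted to a robot coordinate; the numeric values are illustrative only.

import numpy as np

def pixel_to_robot(pixel_xy, A, t):
    """Map an image point (pixels) to the robot coordinate system (mm).

    A is the 2x2 linear part (scale/rotation/shear) and t the offset of an
    affine transform obtained from camera calibration (see section 2.3).
    """
    return A @ np.asarray(pixel_xy, dtype=float) + t

# Illustrative transform: 0.1 mm per pixel, 5-degree rotation, fixed offset.
theta = np.deg2rad(5.0)
A = 0.1 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
t = np.array([120.0, 80.0])

cap_center_px = (640.0, 512.0)               # position reported by the vision system
print(pixel_to_robot(cap_center_px, A, t))   # target X, Y for the manipulator (mm)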
2.1 Vision system design description
To improve the stability of the vision system and meet the high-speed requirements of the whole process, the two cameras are fixed on brackets, with a bar light source at S1 and a ring light source at S2. For the empty-drum area a 5-megapixel wide-angle camera is used. In the full-drum area the camera's field of view only covers the drum mouth, so the required precision is lower and an ordinary 0.3-megapixel camera meets the requirement. So that the robot grips and places the caps accurately, the vision system calculates the position deviation and the X, Y coordinate position; the data are processed by the PLC and transmitted to the robot over the PROFINET fieldbus, and the robot completes the loosening and unscrewing of the caps with high precision according to the X, Y coordinate values.
2.2 Vision system task
The vision system collects the two-dimensional coordinates (X, Y) of the center of the drum's cap or mouth and the angle R of the current drum relative to the image in the vision template, and sends them to the PLC for processing; the link between the image coordinate system and the robot coordinate system is then established, and finally the robot is guided to complete the process control work.
2.3 Camera calibration
Before carrying out the vision task, the camera must be properly calibrated. This is because the camera's default output coordinates are pixel values, not the actual position of the workpiece, so the mapping between physical coordinates and pixel values must be calibrated. To convert the data captured by the camera into position data in the robot coordinate system, the conversion relationship between the camera coordinate system and the robot coordinate system must be established, and this is done by camera calibration. Once calibration is set up, the pixel values in the measurement results can be converted into actual dimensions for output. The accuracy of the camera calibration and the stability of the algorithm directly determine the positioning accuracy of the workpiece, so a good camera calibration is a prerequisite for the subsequent work.
2.3.1 Preparation before calibration
Before calibration, adjust and then fix the heights of the camera and the light source, the exposure of the camera and light source, and the focal length of the lens; install the calibration needle in the manipulator fixture; print the grid paper used for calibration and lay it flat on the drum surface.
2.3.2 Camera calibration method
The nine-point calibration method is used here: nine points on the calibration grid paper are touched with the tip of the calibration needle, and for each point a value in the robot coordinate system and a value in the image coordinate system are recorded. With these nine pairs of coordinates, functions are called in the In-Sight software to convert pixels to millimeters, to rotate the image coordinate system into alignment with the robot coordinate system, and to correct lens distortion. After the nine-point calibration the image coordinate system and the manipulator coordinate system are parallel and only their origins do not coincide; in other words, when the manipulator moves a given number of millimeters, the image coordinates move by the same number of millimeters.
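Assuming the mapping is affine (which also absorbs the pixel-to-millimeter scaling and the rotation between the two coordinate systems), the nine point pairs determine the transform by least squares, as in the following sketch. This mirrors the kind of calculation the In-Sight calibration functions perform internally; it is not the vendor code.

import numpy as np

def fit_affine(image_pts, robot_pts):
    """Fit x_robot = A @ x_image + t from matched point pairs by least squares.

    image_pts, robot_pts: arrays of shape (N, 2), N >= 3; here N = 9 for the
    nine-point calibration.
    """
    image_pts = np.asarray(image_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    # Build [u, v, 1] rows so the offset t is estimated together with A.
    M = np.hstack([image_pts, np.ones((len(image_pts), 1))])
    params, *_ = np.linalg.lstsq(M, robot_pts, rcond=None)
    A = params[:2].T          # 2x2 linear part
    t = params[2]             # offset vector
    return A, t

# Synthetic example: 9 grid points seen by the camera and the corresponding
# manipulator positions recorded with the calibration needle.
image_pts = [(u, v) for u in (100, 400, 700) for v in (100, 400, 700)]
true_A = np.array([[0.1, 0.0], [0.0, 0.1]])
true_t = np.array([50.0, 30.0])
robot_pts = [true_A @ np.array(p) + true_t for p in image_pts]

A, t = fit_affine(image_pts, robot_pts)
residual = np.linalg.norm([A @ np.array(p) + t - r for p, r in zip(image_pts, robot_pts)])
print(A, t, residual)   # residual close to 0 indicates a good calibration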
2.3.3 Calibration process
According to the process requirements of this paper, the two cameras and the manipulator each need to be calibrated; the calibration is divided into three steps.
1) Teach a user coordinate system of the manipulator with the manipulator's three-point method. The origin of the user coordinate system is chosen according to the actual situation, generally a fixed reference point measured on the filling line; the reference point should be convenient for the camera coordinate transformation calibration (a minimal sketch of how a frame is built from the three taught points follows this list).
2) Create the empty-drum calibration template. Run the empty drum to be calibrated into the working position automatically, lay a sheet of calibration grid paper flat on top of the drum lid, and view the image in real time in the In-Sight software to confirm that it is within the field of view of camera S1. Then move the manipulator so that the calibration needle mounted on the fixture touches the 1st mark point and record the image coordinates and the manipulator coordinates; move the manipulator again and record the manipulator coordinates and image coordinates of the other eight positions in turn to complete the calibration.
3) Create the full-drum calibration template. With the empty drum fixed in the cap-pulling working position, lay a sheet of calibration grid paper on top of the drum mouth and view the image in real time in the In-Sight software to confirm that it is within the field of view of camera S2. Then move the manipulator so that the calibration needle mounted on the fixture touches the 1st mark point and record the image coordinates and the manipulator coordinates; move the manipulator again and record the manipulator coordinates and image coordinates of the other eight positions in turn to complete the calibration.
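Step 1 above teaches the user frame with the three-point method. The sketch below shows, under one common convention (origin, a point on the +X axis, a point in the XY plane), how such a frame can be constructed from the three taught points; the actual teaching is done through the robot's own three-point routine, and the point values here are illustrative only.

import numpy as np

def frame_from_three_points(origin, x_point, xy_point):
    """Build a user frame from three taught points (one common 3-point convention).

    origin   - taught origin of the user frame
    x_point  - a taught point on the desired +X axis
    xy_point - a taught point in the XY plane, on the +Y side
    Returns the 3x3 rotation matrix (columns = X, Y, Z axes) and the origin.
    """
    p0, px, pxy = (np.asarray(p, dtype=float) for p in (origin, x_point, xy_point))
    x_axis = (px - p0) / np.linalg.norm(px - p0)
    z_axis = np.cross(x_axis, pxy - p0)
    z_axis = z_axis / np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)
    return np.column_stack([x_axis, y_axis, z_axis]), p0

# Illustrative points on the filling line (mm, robot base coordinates).
R, p0 = frame_from_three_points((1000, 200, 50), (1200, 200, 50), (1000, 400, 50))
print(R)    # close to the identity: the taught frame is parallel to the base frame
print(p0)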
After calibration, the camera's output coordinate system and the robot coordinate system are unified. Ten random drum positions are read through ten vision photos, the robot is then jogged so that the calibration tip stops at each of these ten positions, the data are recorded, the average error is calculated, and the PLC algorithm corrects the coordinate-system deviation with this error value.
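A minimal sketch of this verification step, assuming the ten vision readings and the ten jogged robot positions are available as paired X, Y values; the PLC correction is then simply the mean offset. The numbers are synthetic.

import numpy as np

# Ten X, Y positions (mm) reported by the calibrated vision system, and the
# positions actually recorded by jogging the manipulator to each mark.
rng = np.random.default_rng(0)
vision_xy = rng.uniform(100.0, 500.0, size=(10, 2))
robot_xy = vision_xy + np.array([0.35, -0.20]) + rng.normal(0.0, 0.05, size=(10, 2))

mean_error = (robot_xy - vision_xy).mean(axis=0)   # systematic offset in X and Y
print("mean error (mm):", mean_error)

# The PLC applies this offset to every coordinate it forwards to the manipulator.
def correct(xy):
    return np.asarray(xy) + mean_error

print(correct([250.0, 300.0]))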
3 Control system design
The filling system consists of an S7-1500 PLC, an ABB robot, a Cognex vision system and V90 PN servos, connected over the PROFINET fieldbus to form a closed-loop control system. The S7-1500 has a built-in PROFINET interface; the network and IP address are configured in the hardware configuration, and the V90 PN and vision devices are added as slaves. To configure the ABB slave, first copy from the ABB robot's hard disk the PROFINET GSD file that matches the actual hardware and system version, add the ABB slave station in the hardware configuration, assign the device name and IP address, and configure the relevant PROFINET communication settings and variables in the ABB robot's control panel. The PLC then collects the vision data, runs the logic and precisely controls the process, so that the vision system takes over the recognition function of the human eye and the manipulator with its fixture components takes over the hand operations of loosening and screwing the caps. The human-machine interface can monitor each photo step of the vision system. The working area of the equipment is enclosed by a protective fence; the access door has a dual-circuit door-detection switch connected to the manipulator's safety circuit, so that man and machine are isolated and personnel safety is ensured while the equipment is running.
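The data exchanged between the vision system, the PLC and the robot over PROFINET amounts to a small fixed-layout record (coordinates, angle and handshake flags). The layout below is a hypothetical illustration of such a record, not the actual I/O map of this system; byte packing is shown only to make the layout explicit.

import struct
from dataclasses import dataclass

@dataclass
class VisionResult:
    """Hypothetical record forwarded by the PLC to the robot after each photo."""
    x_mm: float        # cap / drum-mouth centre X in robot coordinates
    y_mm: float        # cap / drum-mouth centre Y in robot coordinates
    angle_deg: float   # angle R of the cap lugs relative to the template
    photo_id: int      # which photo step produced this result
    ok: bool           # result valid, deviation within the permitted value

    FORMAT = ">fffH?"  # big-endian layout, as is typical for S7 data blocks

    def pack(self) -> bytes:
        return struct.pack(self.FORMAT, self.x_mm, self.y_mm,
                           self.angle_deg, self.photo_id, self.ok)

    @classmethod
    def unpack(cls, raw: bytes) -> "VisionResult":
        return cls(*struct.unpack(cls.FORMAT, raw))

msg = VisionResult(x_mm=312.5, y_mm=145.75, angle_deg=37.5, photo_id=4, ok=True)
raw = msg.pack()
print(raw.hex(), VisionResult.unpack(raw))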
4 Conclusion
This paper has described a robot vision system that has been put into use; during operation the system runs well. The Cognex vision system used, with the powerful In-Sight spreadsheet software, is simple to design with and easy to maintain; the whole system is stable and efficient, meets production needs, improves the enterprise's production efficiency and saves labor and management costs. With the development of robot and vision technology, vision-based robot control systems will be used more and more in intelligent industrial upgrading.