During production, there are known problem areas that could result in the creation of a bad part. Checking for potential issues during production allows manufacturers to scrap or rework unacceptable parts at the beginning of a run and correct issues before too many parts are produced. This saves a significant amount of time and expense.
There are many types of devices and systems that can check for production errors. One example is a machine vision system that captures images of parts at various stages of production, analyzes them and triggers a predetermined action. Examples include detecting a poorly positioned part, a missing part or component, or a step performed out of order.
One useful way to check on the production process is error proofing, which verifies that a production process happens according to plan. Fanuc introduced a new AI Error Proofing function in its iRVision detection system; the function is designed for parts inspection and uses machine learning technology.
AI Error Proofing is designed to check for two distinct situations, and example images of both situations need to be used to train the tool. For example, if the tool is used for checking the presence or absence of a welded nut, images of the part with the nut and without the nut need to be used to train the error proofing tool. (AI Error Proofing is not designed for detection of flaws such as scratches or dents that occur in random positions on a part.)
Fanuc introduced iRVision in 2006, and each year continues to add new features and functionality that make it easier to use and more powerful. iRVision is Fanuc’s fully robot-integrated visual detection system; it enables robots to see so they can manage production settings in a faster, smarter and more reliable way, increasing overall production flexibility and efficiency in the workplace. The application solution can be implemented without complicated programming or expert knowledge, resulting in high operational efficiency for the overall process.
The error proofing tool is built into iRVision and adds artificial intelligence without any additional hardware. Like every iRVision product, AI Error Proofing does not require an additional processor; all processing happens within Fanuc’s robot controller. The same processor that controls the robot and its motion performs the vision processing, including the AI Error Proofing function. Because iRVision does not use a PC or smart camera, it does not negatively impact the reliability of a workcell.
What makes AI Error Proofing artificial intelligence? By providing multiple examples of good and bad parts, the error proofing tool is able to differentiate between the two during production runs. During setup, the operator presents multiple examples of parts and classifies them into two categories – good and bad. Once the operator classifies the images, the error proofing tool automatically classifies the parts during production runs.
Figure 1 shows an example of AI Error Proofing finding a welded nut on a shock mount bracket. Examples of the welded nut and the missing nut were used in the tool’s learning process. In the example, class 1 was trained with the nut and class 2 was trained without the nut. Figure 1 shows the welded nut in class 1, highlighted in cyan.
Figure 2 provides an example where the operator differentiates between examples. The operator classifies a plastic applicator with a lid as class 1 and without a lid as class 2. All class 1 examples are cyan and class 2 examples are orange.
Figure 3 shows the results of the classifications from Figure 2. Multiple objects may be classified in the same image. Figure 3’s example shows two different applicators. The one with a lid is highlighted in cyan and the one without a lid is highlighted in orange. In this case, iRVision’s GPM Locator Tool identified the location and orientation of the applicator. Combining the GPM Locator Tool’s pattern matching ability with AI Error Proofing allows parts to be found and classified at the same time in the same image. The combination of these tools allows the robot to pick plastic applicators from a conveyor and place the ones with a lid into the filling machine, and the ones without a lid into a reject bin.
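The locate-and-route behavior described above can be sketched as follows. This is a hypothetical illustration only: the part representation, class numbering and destination names are assumptions for the applicator example, not Fanuc's actual iRVision interface.

```python
# Hypothetical sketch of routing parts that have been located (GPM
# Locator Tool) and classified (AI Error Proofing) in the same image.
# All names and data below are illustrative assumptions.

def route_part(part_class):
    """Choose a place destination from the error proofing class."""
    if part_class == 1:        # lid present (trained as class 1)
        return "filling_machine"
    elif part_class == 2:      # lid missing (trained as class 2)
        return "reject_bin"
    return "manual_review"     # neither class matched

# Each found part carries a location and orientation from the locator
# plus a class from the error proofing tool.
found_parts = [
    {"x": 120.5, "y": 60.2, "angle": 15.0, "part_class": 1},
    {"x": 245.0, "y": 58.7, "angle": -4.0, "part_class": 2},
]

for part in found_parts:
    dest = route_part(part["part_class"])
    print(f"pick at ({part['x']}, {part['y']}), place in {dest}")
```

The point of the combination is that one image yields both where to pick and where to place, so no separate inspection station is needed.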
Because artificial intelligence is a learning process, an operator may easily add images to the library. During production startup, parts that are incorrectly categorized can be added to the learned data as properly categorized parts to improve the learned model. In the current scenario, AI Error Proofing outputs examples as either class 1 or class 2. If an example does not fall into either class, it is output as undetermined, and it can then be added manually to improve the learned model.
Along with the class, the confidence is also output. The higher the confidence, the more certain the error proofing tool is that the example fits into one of the two classes. Based on a user-defined threshold, the application can be set up to flag inspections with a low confidence and allow the operator to manually add the example to the learned data to improve the learned model.
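The decision logic described above, two trained classes plus an undetermined outcome and a user-defined confidence threshold, can be sketched in a few lines. The function name, class labels and threshold value are illustrative assumptions, not Fanuc's actual iRVision API.

```python
# Hypothetical sketch of the class/confidence decision described above.
# The threshold and names are assumed values, not Fanuc's iRVision API.

CONFIDENCE_THRESHOLD = 0.80  # user-defined; assumed example value

def inspection_decision(predicted_class, confidence):
    """Map a raw classifier output to (label, flag_for_review).

    predicted_class: 1 (e.g. nut present), 2 (nut missing), or None
                     when the example fits neither trained class.
    confidence:      0.0-1.0 score reported along with the class.
    """
    if predicted_class is None:
        # Neither class matched: always send to the operator so the
        # image can be added to the learned data.
        return "undetermined", True
    label = "class 1" if predicted_class == 1 else "class 2"
    # Low-confidence results are flagged so the operator can confirm
    # them and optionally add them to the learned model.
    return label, confidence < CONFIDENCE_THRESHOLD

print(inspection_decision(1, 0.95))    # ('class 1', False)
print(inspection_decision(2, 0.60))    # ('class 2', True)
print(inspection_decision(None, 0.0))  # ('undetermined', True)
```

Flagged examples feed back into training, which is what makes the tool improve over the life of the application.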
Proper and consistent lighting is always important with machine vision applications. With AI Error Proofing, it is less of a concern. By providing examples of the good and bad parts over a range of lighting, the error proofing tool can learn the difference between the examples and properly differentiate between the good and bad parts.
A closer look
Like all iRVision products, AI Error Proofing supports robot-mounted and fixed-mounted cameras. A robot-mounted camera allows the robot to inspect parts from multiple angles and locations. In many cases, a camera can be added to the tooling to add the error proofing functionality with minimal impact on the existing process. In other instances, it may be more cost-effective to add a new robot to position the camera in different locations around the part.
The camera does not have to be robot mounted. It can be set up in the workcell to error proof one particular area of the part. Because iRVision can support up to 27 cameras, any combination of robot- or fixed-mounted cameras can be used to error proof all the required areas of the part.
The iRVision cameras utilize a fixed focal length lens. This means that the field of view is a function of the selected lens and the camera’s distance from the viewing area. By selecting the appropriate lens and standoff distance, the correct field of view required for the error proofing process can be achieved. Typically, the larger the area to be error proofed appears within the field of view, the more reliably AI Error Proofing can classify it.
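The lens-and-standoff relationship above follows the standard thin-lens approximation: field of view scales with sensor size and standoff distance, and inversely with focal length. The sketch below uses assumed sensor and lens values for illustration; the numbers are not specifications of any particular iRVision camera.

```python
# Thin-lens field-of-view estimate for a fixed focal length camera.
# Sensor width, focal length and standoff below are assumed example
# values, not specifications of any particular iRVision camera.

def field_of_view(sensor_dim_mm, focal_length_mm, standoff_mm):
    """Approximate field of view (mm) covered at the given standoff."""
    return sensor_dim_mm * standoff_mm / focal_length_mm

# Example: a sensor ~4.8 mm wide with a 12 mm lens at 600 mm standoff.
width = field_of_view(4.8, 12.0, 600.0)
print(round(width, 1))  # 240.0 -> a 240 mm wide viewing area

# Halving the standoff, or doubling the focal length, halves the field
# of view, so the inspected feature fills more of the image and can be
# classified more reliably.
```

In practice this means the lens and mounting distance are chosen together so the feature being error proofed fills as much of the image as the application allows.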
There is a misconception in machine vision that higher resolution imaging is a requirement. In most robotic automation cases, high resolution is simply not necessary. AI Error Proofing is designed to provide high performance with a standard resolution camera.
Companies that use AI Error Proofing do not require an experienced vision engineer to set up the process. As long as the human eye can detect the differences between parts, the error proofing tool is also able to differentiate between them. AI Error Proofing can be used in instances where even an experienced vision engineer would struggle to do the job with conventional machine vision tools.
Even without AI Error Proofing, an experienced vision engineer may be able to set up the error proofing vision process for many applications using iRVision’s suite of tools. However, it often takes a significant amount of time to set up and ensure reliability for some of the more complicated processes. Using the error proofing tool to learn to differentiate between good and bad parts eliminates the need to have an expert vision engineer. It also reduces the complexity of the vision setup, saving time and money during integration and startup.
In summary, adding error proofing can improve a manufacturing process by catching manufacturing errors early, which improves production efficiency. AI Error Proofing makes it easy to add error proofing to any Fanuc robot application, providing customers a variety of advantages, including reduced lighting and camera resolution requirements, fewer engineering hours needed to perfect the system, and lower costs compared to traditional methods.