Robotics & Automation


Industrial robotics and automation are growing at breakneck speed. Well, that's what the reports say. Funny thing is, if you go back 10 years and read analysts’ reports about robotics and automation, those reports will also say that robotics and automation are growing at breakneck speed. Go back to the reports of the early 90s and guess what? Breakneck speed.

Frankly, robotics and automation have been around for a long time.

Robots and automation in industry date back over seven decades. The first forms of manufacturing machine automation based on electronic controls appeared in the late 1940s in the work of John T. Parsons. In the 1950s, the first large prototypes of Numerical Control (NC) machines started to appear, driven mostly by the needs of aircraft manufacturers for interchangeable parts made to tight tolerances. Throughout the 1950s, the Air Force supported the development of Numerical Control, Computer Aided Design (CAD), and robotics. In the 1960s, private industry pressed forward with GM’s development of DAC-1, the first commercial CAD system, which GM used internally to take designs directly to manufacturing. Outside of GM, Control Data Corporation’s Digigraphics became the first commercial system to tie Computer Aided Design to Computer Numerical Control (CAD/CNC); Lockheed used the system to make production parts for the C-5 Galaxy in the 1960s.

In the period between the early 1960s and the mid 1980s, automation continued to grow rapidly in three different directions: Computer Aided Design (CAD), Computer Numerical Control (CNC), and systems automation. Material Handling Equipment (MHE) systems evolved in this era as the manufacturers of conveyors and mobile equipment embraced the new technology. In 1953, Barrett Electronics introduced the first Automated Guided Vehicle (AGV), a simple tow tractor that followed a signal broadcast by a wire embedded in the floor of a manufacturing facility. Such wire guidance systems are still in use today; you will find them directing some older AGVs and Very Narrow Aisle turret trucks. Conveyors also evolved, with the development of different mechanical methods of diversion and of sortation conveyors, a conga line of diverts controlled with indexers and line-speed controls.

Before the introduction of barcodes as the predominant form of automated identification, sorters depended on a human operator to look at a label and key a number into the conveyor controls before the box traveled down the conga line of diverts. With barcodes, an overhead scanner could read the code and send the ID data to the conveyor controls for diversion.
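The scan-and-divert logic described above boils down to a lookup: the scanner reads a code, and the controls map it to a lane. Here is a minimal Python sketch of that idea; the barcode values, lane assignments, and function name are all invented for illustration.

```python
# Hypothetical sketch of sortation divert logic: a scanner reads a barcode
# and the conveyor controls look up which divert lane the carton belongs to.
# Barcodes and lane assignments below are invented for illustration.

DIVERT_TABLE = {
    "ORD-1001": 1,   # lane 1: west-coast shipments
    "ORD-1002": 2,   # lane 2: east-coast shipments
    "ORD-1003": 3,   # lane 3: returns processing
}

RECIRCULATE = 0  # no match or no-read: send the carton around the loop again

def route_carton(barcode: str) -> int:
    """Return the divert lane for a scanned barcode."""
    return DIVERT_TABLE.get(barcode, RECIRCULATE)

if __name__ == "__main__":
    for scan in ["ORD-1002", "UNREADABLE"]:
        print(scan, "-> lane", route_carton(scan))
```

A real sortation controller adds timing (tracking the carton from scan point to divert point), but the decision itself is exactly this kind of table lookup.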

When I went back to college in 1984 to get my Industrial Management degree, robotics and automation were still growing at breakneck speed. The Big Three automakers (and the Japanese and Germans) used industrial robots in welding and painting applications. Industrial manufacturers continued to buy and install CNC machines to replace older manual operations. The introduction of microprocessors helped accelerate the implementation of CNC machines, replacing punched paper tape with disk drives and diskettes. In 1985, I worked with three other students to convert a manual horizontal milling machine into a CNC-controlled machine using stepper motors, gear boxes, and a Commodore 64 computer. That prototype became the base for a poor man’s CNC system. Using that same process, I developed a three-axis robotic arm from scratch, using the same base code we'd used for the CNC machine we built the year before.

Programmed vs. Autonomous

The history lesson above is important to understand because there is a new construct in automation that has begun to take hold in the past 10 years.

In the early years of automation and robotics, the code that controlled the hardware consisted of a programmed set of highly defined instructions. The code would look like this (in pseudo code):

    Clamp: On
    Spindle Location: Z = +150
    Spindle Motor: On
    Spindle Speed: 22,000
    Table Location: X = 2300, Y = -1200, Speed 200
    Spindle Location: Z = +075
    Table Location: X = 2100, Y = -1600, Speed 025
    Spindle Location: Z = +150
    Table Location: X = 2300, Y = -1200, Speed 200
    Spindle Location: Z = +075
    Table Location: X = 2300, Y = -1100, Speed 025

If these lines of code were for a horizontal mill, we have just machined two grooves in a piece of material. The same logic applied to the way the early robots functioned, and it was how many of the various CNC machine tools operated.
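The defining feature of this construct is that the machine makes no decisions at all; it steps through a fixed list of commands. A minimal Python sketch makes that explicit (the instruction names and coordinates mirror the pseudo code above and are illustrative, not any real controller's format):

```python
# A minimal sketch of the "programmed" control construct: the machine
# executes a fixed list of instructions with no decisions of its own.
# Instruction names and values mirror the pseudo code and are illustrative.

program = [
    ("clamp", "on"),
    ("spindle_z", 150),
    ("spindle_motor", "on"),
    ("spindle_speed", 22000),
    ("table", (2300, -1200, 200)),   # X, Y, feed speed: rapid positioning
    ("spindle_z", 75),
    ("table", (2100, -1600, 25)),    # first groove, slow cutting feed
    ("spindle_z", 150),
    ("table", (2300, -1200, 200)),
    ("spindle_z", 75),
    ("table", (2300, -1100, 25)),    # second groove
]

def run(program):
    """Apply each instruction in order; return final machine state and a log."""
    state = {}
    log = []
    for op, arg in program:
        state[op] = arg
        log.append(f"{op} <- {arg}")
    return state, log

state, log = run(program)
print("\n".join(log))
```

Every contingency (a hard spot in the material, a dull cutter, a misplaced blank) had to be anticipated by the programmer, because nothing in this loop can react to one.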

Before NC and CNC, a highly trained mill operator might have had a set of drawings and perhaps a set of instructions. The operator looked at the drawings, considered the material, the cutting tool, the strength of the machine, and the kinds of cuts needed before deciding what steps to take. In some cases, an engineer could make these decisions for the operator, but the operator could choose to ignore those instructions. The human operator was autonomous from the engineer; the operator set the spindle speed, clamped the blank, moved the table handles, lowered the cutter spindle, and then moved the table in the proper direction. No matter how much skill the operator had, or how much care he took, there was always part-to-part variance and a loss of tolerance.

The early NC machines required the knowledge and experience of the seasoned operator to be tediously translated into detailed, step-by-step program code that told the mechanism what to do. The example above could be thousands of lines. While many people thought the benefit of the NC systems was their replacement of high-cost labor, the cost of the engineering required to program the machines could be much more than the cost of the blue-collar labor. The real benefit was the reduction of part-to-part variance and the high tolerances that the NC machines could create.

While the marriage of CAD with Computer Aided Manufacturing (CAM) could remove many of the more tedious steps in the process, the construct between the creative mind and the machine remained the same—detailed, minute steps based on pre-planned and pre-determined situations. Even when applied to material handling systems—specifically AGVs and conveyors—the system lost flexibility to adapt to changing situations because of the rigid construct of fixed programming to fixed expectations.

Thinking Machines

Three factors are changing the control construct in robotics and automation.

More Machines
The International Federation for Robotics estimates there are 1.2 – 1.5 million industrial robots operating today. In 2012, companies acquired over 160,000 new robots. Even assuming that some of these were replacements, the growth is in the double digits.

Lower Costs
With more machines, the cost per unit of hardware drops. But the real revolution comes from Moore’s Law—constant increases in computing power and software capability, accompanied by rapidly falling costs. The combination lowers investment costs, making the systems economically feasible.

Greater Capability
More computing and control power allows the machines to become smarter. With the application of lasers, inertial guidance, RF-based positioning, and motion detection, AGVs no longer need wires in the floor for guidance. The machines can now choose the shortest paths to their objectives and navigate around congestion points in a facility.

The new controls construct is autonomous control—the robots have sufficient intelligence to navigate their environments, making independent decisions about what the next task is, where to go, when, and how fast. Coupled in a broader network, these automated systems can balance the needs of local optima with the global needs of the system.
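The "choose the shortest path and route around congestion" behavior described above is, at its core, graph search over a map of the facility. Here is a toy sketch using breadth-first search on a grid; the map, coordinates, and function name are invented for illustration, and real AGV planners use richer maps and cost functions.

```python
# A toy sketch of autonomous AGV routing: given a grid map with blocked
# (congested) cells, the vehicle finds its own shortest path instead of
# following a fixed wire. The map and coordinates are invented.

from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a grid; 1 = blocked, 0 = open."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}              # visited set + back-pointers
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:            # reconstruct the route
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no route around the congestion

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],   # a congested aisle the AGV must route around
    [0, 0, 0, 0],
]
print(shortest_path(grid, (0, 0), (2, 0)))
```

If a cell in the map changes from open to blocked, the vehicle simply re-plans—which is exactly the flexibility the rigid fixed-program construct could not offer.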

Google’s driverless car is just one example of the new construct. Issues remain to be addressed before the driverless concept rolls out to a broader world of highways, but the application is in action in warehouses today. Look at any Kiva application and you see an example of the mixture of local autonomy and systems coordination.

A few years ago, I spoke to a group of skeptical 3PL operators about the notion of a robot unloading containers. The skeptics said it could not happen because of the random nature of container loading, and that robots required predictable patterns. In March 2013, I watched a video produced by Industrial Perception, Inc. of a robot unloading a container—a load of random-sized boxes stacked in random patterns. In this video, an Industrial Perception, Inc. robotic arm fitted with a hinged hand lined with vacuum-powered suction cups unloads a mockup container. Confronted with a stack of boxes, the arm surveys the scene, picks the best box to move, decides where to grab it, and then tosses the box to IPI's Product Manager, Erin Rapacki.

Here is another video from the IPI lab, of the robot unloading a floor-loaded trailer. Mounted on a mobile platform that navigates into the trailer, the perception technology finds the wall of cases (or boxes), locates individual cases, picks a case, and places it on the outfeed conveyor. In this configuration, the truck unloader hits single pick rates of 600–700 cases per hour.

Where is Industrial Perception today? In December 2013 and January 2014, Google quietly acquired seven technology companies in an effort to create a new generation of robots. Google bought Schaft, Meka, and Redwood Robotics, companies focused on making humanoid robots. For robotic vision, Google bought Bot & Dolly, a maker of robotic camera systems, and Industrial Perception, the start-up that developed computer vision systems and robot arms for loading and unloading trucks.

With the Big G buying out these developers, when do you think we will see a higher level of physical automation in the warehouse? We must be patient and wait for Andy Rubin, the former Android head and a serious robotics guy, to figure out the next steps.
