The latest version of our Arrow On Point podcast is out! In this episode, Kevin Schoonover, director of engineering services, and Adam Catbagan, manager of virtualization and networking technical services, talk with William Pickard, applications engineer on the Arrow SAM car project.
Be sure to subscribe so you won’t miss an episode!
In this podcast, we discuss:
- What was the goal of the Arrow SAM car project?
- What technologies are being used in the Arrow SAM car and what are the implications for the IT community and IT organizations?
- What data is being gathered and how is it being used?
- Will this technology impact everyday consumers?
Man 1: The SAM project, or the semi-autonomous car project, was about modifying a 2014 C7 Corvette Stingray, integrating advanced electronics along with a human-to-machine interface such that a qualified quadriplegic can drive the car. The debut was at the Indy 500.
Female Announcer: A former Indy car driver who is now a quadriplegic is back on the track thanks to some amazing technology.
Male Announcer: Here’s how it works. When Schmidt moves his head, an array of infrared cameras around the driver’s seat detect motion by reflecting light off markers on his hat. These cameras can track minute head movements down to four thousandths of an inch. That’s about the thickness of a single sheet of paper. This information is then relayed to a computer in the trunk at about 170,000 miles per second, nearly the speed of light.
This computer interprets Schmidt’s movement and sends that information to a second computer that uses advanced algorithms to output the data back to the car. Side to side head movement activates the steering servo motor, a gear system that controls the steering wheel. A quick head tilt backwards engages the gas servo motor, which accelerates the car. And to brake, depending on how quickly he needs to slow down, Schmidt bites down on a sensor with between 60 and 80 pounds of force. And in fact, from the time Schmidt moves until the time the car responds takes as little as a hundredth of a second. That’s as fast as a lightning strike.
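As a back-of-the-envelope illustration of the control scheme just described (side-to-side head movement for steering, a backward head tilt for gas, and a 60 to 80 pound bite for braking), here is a minimal Python sketch. The angle ranges, thresholds, and the brake-overrides-throttle interlock are all illustrative assumptions, not the SAM car’s actual tuning.

```python
# Hypothetical sketch of the head-to-control mapping described above.
# All thresholds and scaling are illustrative assumptions.

def map_head_to_controls(yaw_deg, pitch_deg, bite_force_lb):
    """Translate head pose and bite-sensor force into actuator commands.

    yaw_deg:       side-to-side head angle -> steering (-1.0 .. 1.0)
    pitch_deg:     backward tilt -> throttle (0.0 .. 1.0)
    bite_force_lb: bite-sensor force -> brake (0.0 .. 1.0)
    """
    MAX_YAW = 30.0                    # assumed full-lock head angle, degrees
    THROTTLE_TILT = 15.0              # assumed tilt for full throttle, degrees
    BITE_MIN, BITE_MAX = 60.0, 80.0   # braking force range quoted in the piece

    # Clamp steering and throttle into their valid ranges.
    steering = max(-1.0, min(1.0, yaw_deg / MAX_YAW))
    throttle = max(0.0, min(1.0, pitch_deg / THROTTLE_TILT))

    # Brake force below the minimum threshold is ignored;
    # 60..80 lb maps linearly onto 0..1 braking.
    if bite_force_lb <= BITE_MIN:
        brake = 0.0
    else:
        brake = min(1.0, (bite_force_lb - BITE_MIN) / (BITE_MAX - BITE_MIN))

    # Braking overrides throttle, as a simple safety interlock.
    if brake > 0.0:
        throttle = 0.0
    return steering, throttle, brake
```

In this sketch, a 70-pound bite, halfway through the quoted 60 to 80 pound range, applies half braking and cuts the throttle.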
Man 2: We are trying to replicate this car as close to a real driving experience as possible. We’re not trying to make him feel anything different. We want the exact same fluid movement.
Male Announcer: During testing, Schmidt has already reached speeds of more than 100 miles per hour, which means he can cover a car length in about a tenth of a second. At any speed, safety is of course the highest priority, which is why the car is equipped with a unique GPS safety system. This advanced tracking system can pinpoint the car’s location down to five inches, and if the car goes off course, a computer system will take over the controls to avoid danger.
Man 1: The future of this, truthfully, is only limited by people’s imagination, but we believe that this demonstration of advanced technology to enable a quadriplegic to drive a car really opens the door to many more ideas, can be inspirational for the disabled community to live a better future, and many aspects of this technology can be deployed in automotive and industrial applications, in-home health care, and other aspects of human life.
Kevin: Hello everybody, Kevin Schoonover. We’re back again with another episode of Arrow ECS On Point Podcast from the engineering team here at Arrow, covering different technology topics. Today we are going to discuss an Arrow project known as the SAM car, and some of the impacts around that and how it relates to big data. Joining me is Adam.
Adam: Hey Kevin, how you doing? Good to see you again. Excited to be on our second webcast. Just as a reminder, my role within Arrow is I’m the senior systems engineering manager for the Arrow ECS ISG group, focused on data center infrastructure.
Kevin: And also joining us is William. William, go ahead and introduce yourself.
William: Hi, I’m William Pickard. I’m with the engineering solutions center here in Denver, part of North American Components, and I am one of the engineers on the SAM car.
Kevin: Excellent. So the SAM car is a very interesting topic. A few years ago, Arrow set off to do some things that seemed impossible to do, and we’ll hopefully be able to roll some of the sports coverage of the SAM car into this. William, can you give us a little background on what the SAM car project was all about and how that came to fruition?
William: Sure. It really all started with Sam Schmidt. Sam is an Indy car race team owner; he also runs Indy Lights and a couple of other race series, and I believe Mike Long, our CEO, got to know him through various channels. Sam is also a former Indy car driver. He was injured in a crash about 15 years ago and was paralyzed from essentially the collarbone down. So he can shrug his shoulders and he can move his head, but he’s been wheelchair-bound for 15 years.
And I believe that some time around June of 2013, we sent a team out to Sam’s home in Las Vegas, and we pitched him the idea that we wanted to build him a race car that he could drive again. And at the time he didn’t really take too much convincing. I believe his exact response was, “Well, if you build it, I’ll drive it,” and at that time we started off on this long journey that has been the SAM car project. Actually, I take that back. It’s actually been an incredibly short development period, when you really think about it.
From first conception in June of 2013, what we now term SAM 1.0 was the version of the car designed to run at Indianapolis Motor Speedway, and they wanted to run it before the 2014 Indy 500 in May. So from first conception in June to when the car first ran in May, they developed the entire system, and in fact they only took delivery of the car, a 2014 C7 Corvette Stingray, in December of 2013, I believe.
So the entire build took place in less than six months. I then came onto the team for what was termed SAM 2.0. The goal there was to be able to navigate a road course, and I should mention that we were successful at Indianapolis Motor Speedway: Sam was able to hit 107 miles an hour on the super speedway oval at that event. So we said, “Okay, we can do that. We can hit top speed on a super speedway oval, now let’s really push the control. Can we do a road course? Can we do a really tight, winding road course? Lots of hard turns, things like that?”
So we came in, and we basically had to redesign a lot of the sensing system in the car to make it around the track, and the event we chose to show this off for the first time was the Long Beach Grand Prix in April of this year, 2015. And the Long Beach course, called the Streets of Long Beach, is a street course. It’s made out of city streets; it’s not a smooth, well-groomed race track. It’s city streets that are converted over.
Lots of barriers everywhere, and turn 11 is a 180-degree turn that’s very, very tight. It’s almost like doing a U-turn in the middle of a city street. And the Indy cars themselves have trouble navigating that turn, so we had a big job on our hands. Again we evolved the system, and at the Long Beach Grand Prix SAM went around the course just fine and we were very successful.
And now the next thing we’re looking for is we’re continuing on. We want to do a mix of road course, we’re also hoping to hit some new high speeds at the Sonoma event coming up at the end of the month, which will be the last event of the Indy car season. So, that’s where we are right now.
Kevin: Pretty good. No, that’s a great background, and it’s amazing we were able to put that together in that sort of time frame. Through the project, and perhaps as we head into your role launching in with SAM 2.0, obviously there were obstacles that you ran into. Sometimes those are things that maybe you thought about, but what kinds of unplanned obstacles did you have to deal with along the way?
William: Well on SAM 2.0, I’m one of the sensor experts on the car, so I helped design the whole sensing, and I don’t know if it was unexpected, but the biggest challenge we’ve had with the entire system is that there are a number of technologies that we considered and that were considered for the car that are used for things like motion tracking and tracking the position of a user’s head and things like that, all of which worked quite well in the lab.
And if you or I were just sitting here, even playing a racing simulator (and we have built racing simulators; we have designed this as kind of a game controller before), the big difficulty with the car is that the automotive environment is very noisy. You have a lot of vibration, you have sunlight intrusion. The current system runs off of an IR camera interface, so we use IR cameras to track the position of the head, and obviously when you have bright sunshine coming through the windows, that can confuse an IR camera.
The other major thing is that the car itself is what we term a non-inertial reference frame. So while you’re moving in the car, you’re moving your head around relative to the car to steer it. The car itself is also moving, and the reality is, the motion of the vehicle that you are in shows up as noise in the system as well.
So a lot of what we have to design around is the fact that while you’re steering the car, the car is moving and that is the whole point. And so a lot of what we have to do is cancel out a lot of the motion of the vehicle there, and find ways to make the system more robust and immune to vibration and noise and things like that. And especially in the automotive environment, we have concerns like temperature and a lot of the same concerns that you would go through and design in any kind of automotive system.
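The cancellation William describes can be illustrated with a toy example: if you know the car’s own heading (say from the vehicle’s GPS or IMU) and the head’s orientation in the same frame, subtracting one from the other recovers the steering intent relative to the cockpit. A minimal Python sketch, assuming both angles are available in degrees in a shared reference frame; the real system is considerably more involved.

```python
def head_yaw_relative_to_car(head_yaw_world, car_yaw_world):
    """Remove the vehicle's own rotation from a world-frame head angle.

    Both angles are in degrees in the same world frame; the difference
    is the driver's steering intent relative to the cockpit.
    """
    rel = head_yaw_world - car_yaw_world
    # Wrap into (-180, 180] so a heading change across the +/-180
    # boundary doesn't read as a huge head movement.
    rel = (rel + 180.0) % 360.0 - 180.0
    return rel
```

The wrap step matters on a race track: as the car sweeps through a corner, the raw difference can jump by hundreds of degrees even though the driver’s head barely moved relative to the seat.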
Kevin: Got it. Like you say, things sometimes work well in the lab, but it isn’t until you get them out in the real world and start running into the dynamics out there that you realize what the issues and problems are. From our computer products side of the house, it’s all about data and data turning into information. Any idea what kinds or amounts of information you guys are gathering from the car? There are the systems for how you’re controlling and interacting, but any background on what kind of data capture you’re actually doing along the way?
William: Yeah, and it’s a little bit of a difficult question to answer simply because the car keeps evolving. It’s evolved significantly from 1.0 to 2.0. Certainly with our main camera system that we’re using now, depending on how you want to classify it, the cameras themselves are taking in a massive amount of data. That gets analyzed and pared down internally in the camera software to basically put out head position and things like that.
We also track gas and brake position, speed, and a lot of other metrics in the car. We can also track, just off the vehicle’s CAN bus, all of the standard instrument cluster information. We have very high accuracy GPS on the vehicle, differential GPS. It’s accurate down to, I believe, less than 20 centimeters, updating 100 times a second. When the car ran at Indy, we also had biometric sensing going on. We were partnered with the Air Force for that, I believe, so both the driver and co-driver were wearing biometric armbands. We will also be doing that at Sonoma, hopefully; that’s the plan, so we can get biometric data from the car.
And, yeah, so I don’t have an exact number on what the exact volume of data we collect is, but we’re always looking to use the car as a test bench and a showcase for sensing technologies and activity technologies, so we’re always looking to incorporate how we can pull more useful data from the car.
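Pulling the sources William lists together (control inputs, CAN-bus instrument data, differential GPS quoted at under 20 cm accuracy and 100 Hz, and optional biometrics), one telemetry sample might be modeled like this. All field names are assumptions for illustration; the car’s actual data format isn’t described in the conversation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TelemetrySample:
    t: float                              # seconds since start of the run
    throttle: float                       # gas position, 0..1
    brake: float                          # brake position, 0..1
    speed_kph: float                      # read off the vehicle CAN bus
    lat: float                            # differential GPS latitude
    lon: float                            # differential GPS longitude
    heart_rate_bpm: Optional[int] = None  # biometric armband, when worn
```

At the quoted 100 Hz update rate, a single hour of driving yields 360,000 such samples from the GPS stream alone, which hints at why William can’t put an exact number on the total volume.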
Kevin: Right. No, and that’s kind of what I was hitting on. We realize that there are computers in our cars gathering information, but especially in this case, there’s a lot more control, a lot more function, and a lot more data that can be gathered up. I hadn’t even thought about the biometric aspect, being able to monitor the driver, and in this case the co-driver, and then you start running into correlations of which biometric functions are peaking depending on what’s going on with the car and on the track as well. I want to say it was somewhere along the way that you mentioned Sam’s heart rate actually settles down the faster the car goes.
William: Yeah, and that’s actually… like I said, I wasn’t on the team for the first run at Indy, but one of the stories that’s told is that when they ran they had, of course, biometrics on both Sam and the co-driver at the time, Andrew Dawes. And when I looked at the data, with Sam hucking this thing 100-some-odd miles an hour around Indy Motor Speedway, his heart rate was only a little bit above resting, and Andrew, who was in the co-driver’s seat, was nearly having a heart attack.
So it was far more nerve-wracking for him. I think part of the reason was that Sam has raced Indy cars. For him, 100 miles an hour on that track isn’t that fast, so he was in his element. And then the big thing is, and you mentioned it, for us the focus of the SAM car has really been about the human element. How do you track the data from the driver? Everything, not just the head position and the control inputs. Our main goal is how to give control back to a person who doesn’t have the standard input/output, to use an engineering term. For us, our input is our sense of touch and things like that, and our output is our ability to manipulate the steering wheel. That’s a physical output.
Sam doesn’t have as much of that as we do, so it’s been all about how do we add sensors to the vehicle to give him that much more control output? And so a lot of our focus hasn’t necessarily been on what an automotive designer’s focus would be, which is engine, power train, chassis sensing, and all of the computers that go into that. It’s been more about, “Okay, how do we pull as much data as possible from the driver and the co-driver and their condition?”
Kevin: Oh, absolutely. And as you were saying that, it popped into my head when I had seen one of the interviews and they talked about the co-driver’s heart rates accelerating as, of course once again the dark humor aspect would be, “I’m riding around the track with a quadriplegic. Why would I not be concerned?” You know?
William: Yeah, exactly. I can tell you this, though. After having worked with Sam for quite a while... Noel Marshall is the current co-driver, and I currently test drive the car a great deal. So she’ll be in the co-driver position and I will be driving the car using the same head controls that Sam uses, and after weeks and weeks I’m fairly certain that Sam can drive that Corvette better now, with the systems we have, than I could using my hands and feet.
We took him out for a testing day. There was no other track time available, so we actually went out to Bandimere Speedway here in Denver, and they have a go-kart track off of their parking lot. And we took this full-sized, 440-some-odd-horsepower Corvette and he drove it around that go-kart track. We did not realize until we brought a camera crew out with a drone, and saw the drone footage, that every single lap he did, he took a perfect racing line. Even though the track was barely wider than the car itself, he hit every turn-in, hit every apex, and he’d ride up on the shoulder a little bit if he knew he could cut time. It was always perfect.
And I could not have done it. I could not have navigated that car around that track, and we were even wondering, “How is this going to work?” The way we were thinking about it as engineers was, “Man, if he can make it around this go-kart track, well heck yeah he can make it around an actual street course designed for Indy cars!” So it’s amazing how good he is, and I think our CEO Mike Long said it at one point in one of the videos, and he’s absolutely right: Sam never stopped being a race car driver. He just didn’t have a car to drive, and you can’t take that away from him.
And it’s even amazing watching him… our partner on the car, Freescale, built a simulator using their inertial sensing technology, kind of the same technology that’s in a tablet or a cell phone to detect motion. We were just down in Austin for their technology forum, FTF, and they had this running at their tech expo. And everybody else who tried to use that simulator couldn’t even get the car around the track. In order to post a time you have to complete a lap, and I think out of maybe 100 people, only 15 did. And Sam just showed up for an interview one day and we were like, “Do you want to try it?” And he’s like, “Heck yeah!”
So the Freescale guys kind of took the racing seat that was there and just kind of yanked it to the side, and we rolled his wheelchair right up and put the controller on his head, and the first lap he did, he got the second fastest time, and the second lap he did he beat every time that had ever been posted by anybody else by 10 seconds.
His second lap. And that was faster even than the engineers who had been working on the system for weeks. They had been sitting there playing with it, and it was unreal. So, yeah, it’s amazing how talented he is, and it’s great to see him in the car. I had a moment early on in the project, the first time I worked with Sam, where I helped with the load-in team. We lift Sam into the car using an external lift to get him in.
Luckily the Corvette has a removable targa top, so we can remove part of the roof and lower him in through that. And we lowered him in, and I was focused; we were testing out some new sensors I had designed that we were going to be using for gas and brake, so I was really concerned with making sure those worked and we got a lot of good data. Then we came to the end of the day. We’d been driving around for three or four hours, and we pulled the car up and it was time to take him back out of the car, and that’s actually when it hit me what we were doing. Because I had just watched him drive this car around for three or four hours, in a big parking lot, just like anybody else would.
And I had honestly forgotten that he couldn’t get himself back out of the car, because you watch him drive around and you can’t tell any different. And it was that moment when I had to go, “Oh, now I have to go in and we have to help him back out of the car.” And that was really that first moment where it actually hit me that he couldn’t do this stuff anymore, just because he was so good with the actual car.
Kevin: Yeah, and I think you said it earlier. Everything in our world is designed for people who are fully capable of using arms, legs, and full motion. And not to be too grandiose, but through our discussion here, two things pop into my head. One is that when we set a target for building the SAM car, basically what we said is we’re going to build a car that’s fully drivable by a gentleman who doesn’t have the use of his arms and legs, and you head down that path. The not-being-too-grandiose aspect is, think about when we went to the moon. You set a target out there: we’re going to do this, we’re going to go to the moon, and nothing like that had ever been done before.
The interesting thing about those kinds of ambitious targets is they tend to spin off technology. You develop things and you develop tools, and in hindsight we now look at rechargeable drills and tools and things that came from the need to head down that path. The other thing that comes to mind is a documentary I saw recently. In Japan, like most countries, the population is aging. Japan has been very advanced in robotics, and they seem to be spending a lot of time these days on robotics for helping the elderly move around and be more mobile and independent, so being able to get themselves out of, say, a wheelchair or a bed.
So, sorry for the over-the-top lead-in, but have we seen spots where some of these technologies can go elsewhere? Obviously you’re very focused on the sensor aspect; I would have to believe the sensor manufacturers are asking, “Hey, where else can we take this stuff, and what are the other applications for it?”
William: Absolutely, and we kind of describe this project internally as our moon shot, and not in a grandiose way. The similarity is that this is a project that’s very unique in the way it was conceived, because in many ways we were doing it just to do it. Just to say we’d done it. We don’t produce vehicles; we have no intention of turning around and creating a line of vehicles that can be driven by a quadriplegic.
As Arrow Electronics, that’s not really what we’re in the business to do. If somebody wants to go out and do it, we will support them, and we certainly have the components; we can sell them the parts we did it with. A lot of this really was just to prove that it could be done. Now, the really interesting thing about this, and the thing I find quite meaningful, is that it’s actually all done with mostly off-the-shelf technology. This is technology that exists today. It is possible today, with what we have available now. These are parts that I got off the shelf from Arrow suppliers.
I think that’s important and significant, particularly in this project. I’ve worked with other research projects that have targeted the disabled community, those with limited mobility, and a common theme there is that the technology that’s available to people with severe disabilities is really 10 to 20 years out of date.
I mean, really, you have more technology available at your fingertips in your cell phone than most people who have to deal with paraplegia or quadriplegia have access to, and even then, most of the front-line tech is at least 20 years old. So this is an application where we’re taking the components and technology that are available today and applying them to new problems. Tying it back into Arrow’s Five Years Out, it’s about thinking about how the technology that’s available from our suppliers today can be reapplied to new problems.
And I think that’s what’s important to me, because it shows that this is not something where we say, “Oh, it’ll be possible one day.” It’s possible today, and it probably was possible previously, too; it just took somebody to step up and spend the time and the effort to get it done.
Kevin: It’s an interesting observation. I come from a semiconductor background, and I think in the computer products area that Adam and I are in, in a lot of cases we look at being able to hit some new functionality as soon as a new Ethernet chip comes out or as soon as a new controller comes out. And as you’re saying, I think a lot of the technology is there today; it’s just a matter of the application.
Adam: I’m just kind of curious, Will. So you mention all the sensors and the data, right, and getting good data. Where is that collected today? Is that within the sensor itself or how do you guys actually go back and review that data?
William: Well, that’s an evolving problem that we’re actually coming up with some new solutions for right now. We have a main guidance computer that aggregates all the data, primarily to take actions with it. So right now the way everything works is that all of our sensing systems essentially talk over gigabit Ethernet, and our main guidance computer sits on the network and pulls in all that packet data. The primary goal, of course, is to be able to execute steering, gas, and brake outputs through our drive-by-wire system, which runs over the CAN bus.
Now at the same time, that also makes it quite easy for us to aggregate all that data, because if we have to we can even just do packet capture. Right now, one of the things we’re working on with another Arrow business group, Arrow SI, is a cloud gateway we’ve installed into the car. It’s helping us not only aggregate that data; we’re also uploading all of it to a Microsoft Azure cloud database so that the data is more easily accessible and we can more easily do analysis on it, and they’ve created a dashboard for it.
And so we’re working right now on getting all that connectivity in and tested, and the goal is to have data come out live from the car as you’re driving it. As a backup it will also be recording that data, and then we can upload it later if need be.
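The live-plus-backup pattern William outlines (always record locally, push to the cloud when the link allows, replay later if need be) is a classic store-and-forward design. A minimal sketch, with `send` standing in for whatever gateway or Azure client the real car actually uses:

```python
import json
from collections import deque

class TelemetryForwarder:
    """Store-and-forward sketch: every sample is recorded locally,
    and the local recording doubles as a retry buffer for upload."""

    def __init__(self, send):
        self.send = send        # callable(payload) -> bool (True = delivered)
        self.backlog = deque()  # samples not yet confirmed delivered

    def record(self, sample):
        # Always record first, then attempt delivery.
        self.backlog.append(json.dumps(sample))
        self.flush()

    def flush(self):
        # Upload oldest-first; stop at the first failure and retry later.
        while self.backlog:
            if not self.send(self.backlog[0]):
                break
            self.backlog.popleft()
```

Because delivery is attempted oldest-first and halts at the first failure, the backlog preserves sample order across connectivity drops, which matters if you want to replay a lap exactly as it happened.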
Adam: Right, awesome. Can you give us maybe a little insight into what you would do with that real-time data as it comes out of the car? How would you foresee using it?
William: The big one for us is that we treat it the same kind of way race teams would. Sam has a very racer mentality, so a lot of it is he’d like to know how fast he was going where. It’s about pulling together all the information and tying it to our high-accuracy GPS: what were his steering and throttle inputs at any given point? We’d also like to see the driver’s physiological response, which is why we’re going to be wearing health bands in the car and things like that.
Also, simply from an engineering point of view, we’d like to get a lot of diagnostics out of the car. We have diagnostic packets that tell us everything from how sensor communication is going to failure mode analysis, all that kind of information. And especially since the car is still very much a prototype, a lot of what we’re hoping to use it for is, as we’re testing new sensor systems in the car, seeing what the response is in real time. And the important thing for us there, like I said, is that one of the biggest things we have to overcome in the vehicle is the fact that the vehicle is moving around.
So there are certain instances, for example one that we’ve been struggling with from time to time: if the sun is low in the sky and you turn a corner, all of a sudden the sun shines right through the back window and basically blinds the cameras. We’d like to be able to correlate that together. So if we can see, “Okay, our camera system is losing track of the driver’s head at this point, in this corner, at this time,” we can then work back, figure out, “Okay, that’s what’s going wrong right there,” and figure out a way to work around it.
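That kind of correlation, pairing moments of poor head tracking with where the car was at the time, is straightforward once both streams carry timestamps. A hypothetical sketch; the data shapes and the idea of a per-sample confidence score are assumptions for illustration:

```python
def dropout_locations(tracking, gps, threshold=0.5):
    """Pair low-confidence head-tracking samples with the nearest GPS fix.

    tracking: list of (t, confidence), sorted by time
    gps:      list of (t, lat, lon), sorted by time
    Returns (t, lat, lon) for each sample below the confidence threshold,
    i.e. where on course the cameras lost the driver's head.
    """
    out = []
    for t, conf in tracking:
        if conf >= threshold:
            continue
        # Nearest-in-time GPS fix; a linear scan is fine for a sketch.
        nearest = min(gps, key=lambda fix: abs(fix[0] - t))
        out.append((t, nearest[1], nearest[2]))
    return out
```

Plotting those points on a track map would show exactly which corner, at which time of day, blinds the IR cameras.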
Adam: Interesting. How do you handle that today? Like let’s say Sam sneezes when he’s driving the car. How is that corrected?
William: That’s actually probably the most common question we get when we go to events where we talk about the car. “What happens when he sneezes?”
There are a couple of answers to that. The first one is simply how the control system works right now. Originally, Sam would steer the car by tilting his head to the left and right. In aviation terminology, that would be roll: if you imagine an axis coming straight through the driver’s head, it’s rotation about that axis. That system worked very well for Indy Motor Speedway. What we’re using right now for the road course, because it gives us a little more range of motion, is yaw input. That’s the axis going straight down through the top of the head, and it’s that yaw angle.
So the car steers where you look. Our camera system is able to measure those angles in space very accurately, so if he sneezes or something like that, as long as he’s not moving his head wildly, you’re not going to have much problem. That actually was a consideration in choosing the yaw input. We do do some smoothing on the input data, so we take out any sudden jerky movements. At this time the system is always live, though, so one thing he can’t currently do is free-look around the cockpit.
So it’s not like he can all of a sudden check a blind spot or something like that. Sam, in an interview with a local news station, is quoted as saying at one point, “You have to keep the pretty girls out of the stands, because you don’t want any sudden head movement.”
Kevin: That’s awesome.
William: He’s quite funny about it. One of the things on our road map is to start trying to find a way to allow him to free-look around the cab a little bit more with the yaw input. With the roll input he could: he could be looking all the way to his right, and we could still detect what that roll angle was. With yaw, he can’t do that as much, so that’s one of those problems we’re always working through, always looking for new solutions on.
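The smoothing William mentions, taking out sudden jerky movements, could be approximated by rate-limiting each input step and then low-pass filtering it, so a sneeze-sized spike barely moves the output. A hypothetical sketch; `alpha` and `max_step` are made-up tuning values, not the car’s:

```python
def smooth_yaw(samples, alpha=0.2, max_step=2.0):
    """Filter raw per-frame yaw angles (degrees) into a smoothed sequence.

    max_step caps how far the input may move in a single frame;
    alpha is an exponential-moving-average coefficient applied on top.
    """
    out = []
    state = samples[0]
    for raw in samples:
        # Limit how far the input may move in one frame...
        step = max(-max_step, min(max_step, raw - state))
        # ...then low-pass what remains.
        state += alpha * step
        out.append(state)
    return out
```

In this sketch, a one-frame 30-degree spike moves the filtered steering angle by at most 0.4 degrees, while a sustained, deliberate head turn still gets through, just slightly delayed.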
Adam: That’s awesome. Thanks. Yeah, good stuff. Okay, what else have you got, man? I mean, I’ve got some other thoughts but they probably pertain more to how you and I would interpret the analytics and big data side of this story that Will’s telling us.
Kevin: You know, I think that’s… until we start gathering up info, and I think that’s the case with any big data practice, we’ll hit a point in time where data analysis starts turning into this function of “we didn’t know that.” I have a feeling over time, if we’re capturing and retaining the data and information, there are going to be times, probably in car number two, where you’re going to look at going back and saying, “Well, how did this respond in car one? Let’s go back and look at some of the telemetry, look at some of the data, and better understand that.”
So I think it’s a lot of historical stuff over time will start to play out in that as well. Kind of what you’re saying about the off the shelf technology, too, I think there is this momentum or movement that we just need to look at things differently and look at how we’re working with technology. And like you said, the compute power and performance of the…I grew up in college with 8-bit processors running assembly code, and we could do some pretty major things with that. Now we’ve got way, way beyond that capability.
William: Yeah, and I think certainly one of the big things we’re looking at with a lot of this is that there’s so much compute power available, and really one of the big things we had to do on SAM 2.0, for example, was simplify the system. On SAM 1.0 we threw a lot of computing hardware at it. There was a lot of hardware in the back of that car, all mounted to a quarter-inch aluminum plate, and between Indy and Long Beach we removed probably 60% of that computing hardware.
We actually simplified the system down a great deal. And a lot of that simply came from, okay, we’ve got more experience with the vehicle. We learned that there were simpler solutions to certain problems, and in the end, a lot of it was just experience. We didn’t necessarily have the big data to crunch, but it was kind of that operating experience that we had with it. And we were able to really simplify things down.
Adam: Interesting. You guys said, and I don’t know, this may be a future-looking question, I don’t know if you can answer it or not, but I’ll ask anyway. Any intentions of allowing the fans in the stands to interact with the SAM car? I mean, you think about Major League Baseball, where you can tune into the analytics and get all this information on the batter, the pitcher, or the historicals. Anything like that in our future for maybe enhancing, I guess I’ll use the word consumer, the experience of watching Sam drive?
William: We have definitely been working on that, and Arrow SI has been working very hard. Now that they’ve got the data connection between the car and the cloud and we’ve got everything up in the Azure database, that is their next big step. I know they’re working on a number of fan experience things. The first stage would be, and I’ve seen some of the demos, the kind of prototypes, a very nice website dashboard they have, just so you can read the data out.
But we’ve also been working on, like I said, all throughout the process we’ve developed various types of controllers that work on the same technology, so one of the things on the road map is developing an interactive video game that you can control the same way Sam steers the car. In fact, last summer I worked on…we built a fairly straightforward gaming controller that you can clip onto a hat and use to control a video game the same way.
So I know Arrow SI’s working on, for example, being able to take all of that telemetry data that we pulled off the SAM car and in essence, finding a way to be able to…you could race a ghost SAM car. You could race Sam in a virtual environment yourself, kind of in a racing simulator type way.
Adam: Wow, that’s pretty awesome. That’s cool. So you mentioned healthcare, I think, in some prior commentary, and just the legacy aspect, the fact that, hey, we seem to be trailing behind where we sit as a community in consuming technology, and I can confirm that. So not necessarily on the consumer side, but before coming to Arrow, I worked in some large healthcare organizations in the data center, and their applications were certainly 10 years behind, right? Which is maybe preventing some of that faster adoption of relevant technology, such as you mentioned. So we’re taking all this cool learning and exercise with SAM. What are some of the other ideas we’re pursuing? How are we taking this data and helping our partners, our customers, become better at making some of those decisions and driving their business forward?
William: Yeah, so, just to touch on the same point you were talking about. Previously I’d worked on some research around what’s called brain-computer interfaces at the University of Denver, targeted towards patients who are, the term is, locked in. Some of them can’t even move their head; they can only move, say, their eyes or something like that. And we were looking at technologies to assist them to do something as simple as call a nurse, basically hit a nurse call button, because they couldn’t normally, and some of them weren’t even able to use things like Sip-and-Puff tubes, which is kind of the…that technology’s been around for, I think, more than 30 years now, and it’s still the front-line technology in a lot of this.
And to give you a story: we now use, for the road course, a Sip-and-Puff tube interface. The interesting thing about it is that your standard Sip-and-Puff system hasn’t changed in decades, and actually, a lot of them are still mechanical switches. When you blow in, it’s like an on/off. That’s all you get. So a puff would be one switch and a sip would be another switch, and it’s kind of a one-and-zero kind of thing.
But we talked with a company that made a type of headset that would fit around the head and hold the tube in place, and they’d worked…they’re like the first company that comes up on Google when you Google Sip-and-Puff headset or something like that. And you call them up and it’s a tiny company, I think it’s like, 20 people or something like that, and they’re really nice people but they will fully admit that that particular market is, the reality is, thankfully there aren’t that many people relatively speaking, that deal with severe disabilities when you compare it to the entire consumer market say in the United States, 300 plus million consumers or whatever the number is.
So it’s not a large market, and as a result you don’t have very high volume and you’re not going to make a whole lot of revenue off of this. Prices tend to be very, very high, and that’s typical of a lot of the customers we work with out of North American Components in the medical industry. Whenever we work with them, their first question is always about part longevity, because they know, especially if it’s a medical device, the testing and certification process is huge and it’s arcane and it’s complex and it takes forever.
Their development cycle may be 5 years, in a world where DRAM refreshes like every 6 months or something like that. If you’re having to certify a part, and it’s going to take you 5 years to get it certified, and then you have to be able to guarantee it for another 10 years in use after that, that’s a huge challenge. And so we, out of North American Components, help customers with that every single day, and we’ve got various programs to help if they need to do a lifetime buy. A lot of it is there’s always the consumer-grade components, and a lot of times you have to work with defense contracting customers, people going all the way up to aircraft and things like that, so extended temperature ranges and certifications. It’s a very, very complex business, and that’s part of the reason why the technology is so far behind what we see on a consumer level every day.
William: In the end it just means it’s very expensive, and it’s not necessarily as current as a lot of other technologies. So to go back to the example of the Sip-and-Puff sensor, I got on the phone with one of their engineers and he was like, “Oh, I heard about the SAM car.” In the original SAM car, the gas and brake system was completely different, and for braking, Sam actually bit down on, it’s basically a force-sensitive resistor. It was just a little sensor in the mouth he bit down on, and that was it.
He’s like, “I saw that. I would love to hear more about how you did that, because I have customers that would love to use that.” And I’m sitting there thinking, “We’ve got all of this technology, all of this computing power in the SAM car, and he’s interested in the single simplest technology in the entire thing.” It’s literally a force-sensitive resistor into an ADC. It is the simplest thing on the entire car, but for them, that was the coolest thing, because they didn’t have it yet.
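For readers curious how simple that brake really is, here is a minimal sketch of a force-sensitive resistor read through an ADC and turned into a brake command. The 10-bit ADC range, the force calibration, and the 60–80 pound engage/full-brake thresholds (taken from the episode intro) are illustrative assumptions, not the SAM car’s actual implementation.

```python
# Sketch: bite force from a force-sensitive resistor (FSR) via an ADC,
# mapped to a 0.0-1.0 brake demand. All constants are assumptions.

ADC_MAX = 1023          # assume a 10-bit ADC
FULL_SCALE_LBS = 100.0  # assumed full-scale force of the sensor

def adc_to_force(adc_reading):
    """Convert a raw ADC count to an approximate bite force in pounds."""
    return (adc_reading / ADC_MAX) * FULL_SCALE_LBS

def brake_command(adc_reading, engage_lbs=60.0, max_lbs=80.0):
    """Return brake demand in [0.0, 1.0]: zero below the engage
    threshold, ramping linearly to full braking at max_lbs."""
    force = adc_to_force(adc_reading)
    if force < engage_lbs:
        return 0.0
    return min(1.0, (force - engage_lbs) / (max_lbs - engage_lbs))
```

The appeal to that engineer is exactly this simplicity: one analog sensor, one threshold-and-ramp mapping, no exotic hardware.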
And then I told him, “We’re planning on using a Sip-and-Puff system.” And he’s like, “We thought about it.” He was coming from the world of…some people control wheelchairs with Sip-and-Puff sensors, and the way that works is you have four outputs: a strong puff, a soft puff, a strong sip, and a soft sip. And that’s it. Again, it works like switches, so it’s an on/off output, and each one of those is basically a separate switch. So he wasn’t really familiar with the idea I was describing: we needed a continuous kind of output. We needed Sam to have very fine control over gas and brake, so I should probably explain.
He blows into the tube to accelerate and he sucks to brake, and he’s got very fine control over both. So he can feather the throttle and brake so that when he’s going through a road course into a turn, he can brake in, turn in at the apex, and accelerate out with a lot of control. And when I was talking with this engineer, he was going, “Well, we kind of have that in one of our products, but we’ve never been very successful with it and I don’t think it would really work.” Very quickly I was able to go out to one of our suppliers, Freescale, who’s got a great pressure sensor that works in precisely the range we need, with good output, all this kind of stuff, and we prototyped it very quickly.
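The contrast William draws, four discrete switch outputs versus a continuous pressure reading, can be sketched as follows. The pressure units, thresholds, and deadband are invented for illustration; the real systems’ calibrations are not described in the episode.

```python
# Sketch: classic four-switch Sip-and-Puff vs. a continuous mapping.
# Positive pressure = puff, negative = sip. Constants are assumptions.

def classic_four_switch(pressure):
    """Traditional wheelchair-style interface: four discrete outputs."""
    if pressure > 2.0:
        return "strong puff"
    if pressure > 0.5:
        return "soft puff"
    if pressure < -2.0:
        return "strong sip"
    if pressure < -0.5:
        return "soft sip"
    return "neutral"

def continuous_command(pressure, full_scale=3.0, deadband=0.2):
    """Continuous demand in [-1.0, 1.0]: a puff (positive) feathers the
    throttle, a sip (negative) feathers the brake."""
    if abs(pressure) < deadband:
        return 0.0
    sign = 1.0 if pressure > 0 else -1.0
    magnitude = min(1.0, (abs(pressure) - deadband) / (full_scale - deadband))
    return sign * magnitude
```

The continuous version is what lets a driver brake in, hit the apex, and accelerate out smoothly, something four on/off switches cannot express.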
And very quickly we were able to come up with a new take on an old technology that was much more sensitive and had a lot more range of control. In fact, over the summer we had our interns take that technology and create a headset system that used inertial sensors to track head movement. So going back to your earlier question, this is an example of where we took existing technology from Freescale and demoed an application of it, showing how you could integrate three different inertial sensing solutions, the software, and everything else into a very easy-to-develop-and-use package.
And they also used that same pressure sensor, and they could do stuff like move a mouse pointer around with it. They built a system where, with just head movement, they could take an Android tablet or a Windows PC and all of a sudden use it and interact just as with a mouse pointer, using the Sip-and-Puff input and head movement. So that was a new application for that technology, an evolution of an older technology, the standard Sip-and-Puff system, using newer sensors and technology that’s available today.
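The interns’ head-mouse can be sketched in the same spirit: inertial tilt drives the cursor, and a firm puff acts as the click. The gains, deadzone, and click threshold below are hypothetical; the episode does not describe the actual mapping they used.

```python
# Sketch: head tilt (from inertial sensors) to cursor movement, with a
# Sip-and-Puff puff as the click. All constants are assumptions.

def cursor_delta(pitch_deg, roll_deg, gain=4.0, deadzone_deg=2.0):
    """Map head pitch/roll (degrees from neutral) to (dx, dy) pixel
    deltas, ignoring tiny movements inside the deadzone."""
    def axis(angle):
        if abs(angle) < deadzone_deg:
            return 0.0
        offset = angle - deadzone_deg if angle > 0 else angle + deadzone_deg
        return gain * offset
    return axis(roll_deg), axis(pitch_deg)

def click_event(pressure, puff_threshold=1.0):
    """Treat a firm puff on the tube as a mouse click."""
    return pressure > puff_threshold
```

A deadzone like this is what keeps the pointer still during the small involuntary head movements everyone makes; the gain sets how far a deliberate tilt moves it.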
Adam: Interesting. I loved your definition before of our philosophy around five years out. I think that’s a great example of it isn’t necessarily thinking five years from now, it’s about applying technologies that are available to opportunities that can provide advancement in industries that are maybe lacking in the consumption of newer tech. So that’s awesome. I’d never really thought of it that way and that’s pretty cool, so I’ve learned some stuff from today’s call.
William: Oh, I was going to say that the big thing there, obviously, the way I see it, we talk a lot about five years out, but for me it was exactly that. Our customers are the ones thinking of the applications five years out, and as Arrow, it’s our job to get them the technology that’s available today, or in the very near future, to achieve that. We want to be supporting the innovators. We want to help the companies out there looking to do these next big ideas, and the SAM car is an example of how we were able to pool resources.
A great example is the camera system in the car, which is from a company called Natural Point. Natural Point is a customer of ours: we sell them parts and components that go into their tracking system, which we then bought to put back in the car. Our main guidance computer uses a Freescale i.MX6 processor and comes from a supplier of ours called Boundary Devices. So they are a supplier of ours, we sell their boards, and they also buy the parts to make their boards from us.
So we’re at the center of all of this technology. The camera computer in the back is from a company called Lanar [SP]. It’s a hardened automotive PC; I believe they’re through OCS. So all of the technology in the car demonstrates what Arrow can help customers achieve.
Adam: Wow, that’s awesome. Yeah, no, great. Thank you for the insight. It’s been awesome.
Kevin: That’s very good stuff, especially when you’re talking about some of the sensor things. I seem to run into that a lot: in many cases the folks who create these sensors and devices are so focused on the sensor or the device and what they perceive the market to be, and I think in a lot of cases it’s how we, from an opportunistic point of view, can help point them to opportunities they maybe haven’t seen before. I’m probably getting way out over my skis here, but during our IoT road shows…I grew up on a dairy farm, very familiar with cattle, and one of the projects there was a very elaborate stethoscope, a very high-quality stethoscope.
And the company that had developed it was really looking at how to make the best-quality stethoscope, and then it was, well, how about recording capabilities? And then what it really turned into was, well, maybe in large feed lots or large dairy farms we’d have the ability to capture heart sounds and lung sounds from cattle, have that run through your cellphone back to a database, do comparisons from that point of view, and be able to predict pneumonia or other heart-related issues early.
So all of a sudden you’re taking a company who was out to build a really good stethoscope, a stethoscope like you hadn’t seen before, and the business model has shifted into one of analysis: being able to record those sounds and compare them to known good and known bad. It gets back to that big data function of expanding the market, expanding the opportunity with existing technology today. I think we’re expecting a lot of this stuff to be developed and explored out there, and a lot of it is already there. It just needs to be refined and applied to new markets, and I think that’s where, internally at Arrow, you’ve mentioned some of the different groups, Arrow SI, we’re Arrow ECS, a lot of different Arrow groups in the company.
We’re finding that the interaction between our teams is helping drive a lot of this as well. Where you’re working with OCS and SI and your own components group, and then as we roll in more of ECS from maybe the back-end infrastructure and big data analysis side, there’s a lot that can be brought together from a total customer experience point of view. So for Arrow employees viewing this, think in those kinds of terms: bring all the resources Arrow has to offer to bear for your customer.
Kevin: Good stuff. So we’re kind of coming into the end. I’m always terrible at closing out podcasts and calls and things of that nature. We’ve hit on a lot of different topics, really interesting stuff, and very much appreciate your time. Any other things that we haven’t touched on that we had down as things you’d like to hit on in the conversation here?
William: I think we’ve hit all the high points.
Kevin: Okay. That sounds good. Adam, did we get all your questions answered and squared away?
Adam: Yeah, I think so. I’ll leave our viewers with, I guess, kind of my parting thoughts: my eyes have been opened. They were opened at our IoT event, and they’ve been opened on this call. That last example we talked about with the dairy farms, I mean, that’s awesome and it’s a good example of IoT consumption. But it’s specific to that dairy farm, and I’m just blown away by the possibility of, well, what happens if we aggregate all that data across all the farms in that territory? How much more effective can we be in the analytics, and how much more predictive can we be about the health of those particular cows in that example?
And I think you can liken that to the history of healthcare, building those databases and the publications and the research the doctors have done. It’s taken so much time, and I think we’re just on the edge of having a lot of that be instantaneous, and the way we’ll search Google in the future will be so much more powerful in the answers we get.
And I can’t help but think, with maybe a little humor here too, that one day wouldn’t it be cool to see Sam race again with his peer group, in the form of other quadriplegics, and how interesting that would be to watch and try to understand from a human perspective. So, very cool. Well, thank you so much for your time today, it’s been awesome.
William: That’s actually one of the things we’re hoping to do with the SAM car. Obviously, with the SAM car we haven’t been able to race live against anybody else, but I was just out in LA with a company that’s building us a new, high-end racing simulator. The company that makes the software for it, iRacing, holds a yearly competition against professional race car drivers, and apparently they had one of Sam’s former drivers, Townsend Bell, in there, and they mentioned they would love to have Sam compete in this simulator competition, this kind of virtual grand prix. So hopefully by the end of the year there’ll be an opportunity for Sam to get back in, in a virtual world, but actually be able to race against other race car drivers again.
Adam: Wow, that’s awesome. So cool that we’re part of that.
Kevin: Very interesting where the technology’s taking us. Well, with that, we’ll wrap up here. Once again, this has been Arrow On Point, a podcast from Arrow’s engineering team. Today we were talking with William about the SAM car, the future of SAM car 2.0, and some of the applications around sensors, IoT, and big data. Thanks for watching, and we’ll have more episodes in the future. And with that, fade to black.