NASA Robotics Competition

Postby aolofsson » Sat Aug 17, 2013 2:57 pm

I am wondering whether the Parallella could be a "secret weapon" for teams interested in this major crowdsourcing effort by NASA.

--Announcement below---
Registration is now open for teams wishing to compete in the $1.495 million robotics competition known as the Sample Return Robot Challenge, sponsored by NASA and managed by Worcester Polytechnic Institute of Worcester, MA. Registration for the competition will close on January 7, 2014 with late registration available until March 15, 2014. The competition will be held June 11-13, 2014.
For information about the Sample Return Robot Challenge rules, requirements, and how to register, visit:

edit: fixed link

http://www.nasa.gov/press/2013/august/t ... g_Sdm08mSo

"The objective of the competition is to encourage innovations in automatic navigation and robotic manipulator technologies that NASA could incorporate into future missions," said Michael Gazarik, NASA's associate administrator for space technology in Washington. "Innovations stemming from this challenge may improve NASA's capability to explore an asteroid or Mars, and advance robotic technology for use in industries and applications here on Earth."

To win, a team must demonstrate a fully autonomous robot that can seek out samples and return them to a designated point within a set time period. Robots will be required to navigate over unknown terrain, around obstacles, and in varied lighting conditions without human control, GPS, or other terrestrial navigation aids.

This is a Centennial Challenge in which NASA provides the prize purse for technological achievements. The challenge is extended to individuals, groups and companies. Unlike most contracts or grants, awards will be made only after solutions are demonstrated successfully. Since the program's inception in 2005, NASA's Centennial Challenges has awarded more than $6 million to 15 different competition-winning teams through 24 events. Competitors have included private companies, citizen inventors and academia working outside the traditional aerospace industry.

The Sample Return Robot Challenge is part of the Centennial Challenges Program within NASA's Space Technology Mission Directorate, which is innovating, developing, testing and flying hardware for use in NASA's future missions. For more information about NASA's investment in space technology, visit: http://www.nasa.gov/spacetech

Re: NASA Robotics Competition

Postby stealthpaladin » Sun Aug 18, 2013 4:46 am

I am game for being on a team building mini recon bots =)

After seeing how Arduino enthusiasts went after running bots via NodeJS, I've worked out an architecture of intercommunicating mechanical or compute components (or even data components like XML). This is based on a framework for computing across many environments. If there's interest, I would be very pleased to cooperate on implementing this project and to donate a licensed copy of the framework to it.

Now, I'm only talking about the higher-level stuff here; I know my Linux environment quite well. If folks are able to write up individual processing features, we can use this to tie them all together across different mechanics, physical and virtual.



As for the actual work of detection, I think the Parallella is pretty well suited to robotics. When you look at most boards used in robotics, it's not very common to see much in the way of onboard processing. Surely in a contest like this we'll see lots of GPUs/APUs/FPGAs/special-purpose ARM setups and who knows what else.

One great thing about this board is the different processors (not cores) available, each having its own layers of sensitivity/time/resolution. At the same time, each has a different style of ingesting, processing and reporting data. Given that such a multi-purpose robot will have subsystems with widely different tendencies, you can pick the best processor for the particular job and get very large wins in development time, power/data efficiency, and straight usefulness.


Alas, I have not yet actually USED an Epiphany, so I can only comment so far. However, theorizing about its mechanics, I'd say a good first example of this feature would be:

- Epiphany is simultaneously handling visuals with ints and audio with floats, using binary standing-wave interference patterns to compare arbitrary natural waveforms
- FPGA is running passive, triggerable systems, such as motive force in a certain direction and keeping proper tension at three angles (for mini support poles)
- ARM is hosting interprocess communication via simple protocols, capable of marshaling DMA and streaming between specific devices with a small amount of work (see the sketch below)
- Analog unit is watching EMF? Seismic activity? Not sure exactly what that thing is capable of at this point.
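
To ground the ARM-as-coordinator part of that split, here is a rough, untested sketch using the stock e-hal host library from the Parallella SDK. The kernel image name and the mailbox address are made up; the point is just that the ARM loads and supervises the Epiphany while staying free for higher-level work.

[code]
/* host_orchestrator.c -- rough sketch, NOT the framework discussed above.
 * Assumes the stock Parallella e-hal host library; "sensor_task.elf"
 * and the 0x4000 mailbox address are hypothetical. */
#include <stdio.h>
#include <unistd.h>
#include <e-hal.h>

#define MAILBOX 0x4000  /* hypothetical per-core result mailbox */

int main(void)
{
    e_platform_t platform;
    e_epiphany_t dev;
    int result = 0;

    e_init(NULL);                 /* map the Epiphany into host memory */
    e_reset_system();
    e_get_platform_info(&platform);

    /* Open the whole chip and load a (hypothetical) sensor kernel,
     * starting all cores immediately. */
    e_open(&dev, 0, 0, platform.rows, platform.cols);
    e_load_group("sensor_task.elf", &dev, 0, 0,
                 platform.rows, platform.cols, E_TRUE);

    /* The ARM stays free for high-level orchestration; here it just
     * polls core (0,0)'s mailbox for a nonzero result word. */
    do {
        usleep(1000);
        e_read(&dev, 0, 0, MAILBOX, &result, sizeof(result));
    } while (result == 0);
    printf("core (0,0) reported: 0x%08x\n", result);

    e_close(&dev);
    e_finalize();
    return 0;
}
[/code]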

Re: NASA Robotics Competition

Postby Gravis » Sun Aug 18, 2013 1:07 pm

stealthpaladin,

a few notes:
  • you should read the rules for the competition, because a lot of the stuff you mention is not relevant, and you don't need recon bots as they'll just complicate matters.
  • video processing is almost always floating-point based.
  • the epiphany itself can directly connect with other epiphany chips (via the PEC connectors on the Parallella) to effectively join all the cores into one big epiphany chip, up to 4096 cores. so there is no need for your framework.
these competitions seem like they are mostly about writing software to process sensor data, plus a bit of robotics. the electronic development required is fairly limited: it's just a matter of reading sensors and processing the data, which is usually done through a laptop because speed isn't a big issue.

i'm not trying to crush your dreams or anything, this is just some constructive criticism.

Re: NASA Robotics Competition

Postby stealthpaladin » Mon Aug 19, 2013 3:50 am

Gravis wrote:stealthpaladin,

a few notes:
  • you should read the rules for the competition, because a lot of the stuff you mention is not relevant, and you don't need recon bots as they'll just complicate matters.
  • video processing is almost always floating-point based.
  • the epiphany itself can directly connect with other epiphany chips (via the PEC connectors on the Parallella) to effectively join all the cores into one big epiphany chip, up to 4096 cores. so there is no need for your framework.
these competitions seem like they are mostly about writing software to process sensor data, plus a bit of robotics. the electronic development required is fairly limited: it's just a matter of reading sensors and processing the data, which is usually done through a laptop because speed isn't a big issue.

i'm not trying to crush your dreams or anything, this is just some constructive criticism.


No offense taken, I wouldn't want to start anything off with the wrong idea.
Perhaps I am missing something, but honestly I think you might be jumping to a few conclusions.

point 1. I did read the rules, and maybe I shouldn't use colorful terms like 'recon bot', but that's simply my take on what an individual unit sounds like: the bot goes out, collects info, survives while doing so, and returns effectively. Is that not more or less the gist of it?

point 2. I am familiar with how float analysis of video is generally the 'gold standard'; however, I generally follow a different methodology, because I'm rather good with integer overflow and bitmask techniques, where floats can tend to get in the way of precision. It's totally fine to use both on the same stream for different tasks, but when handling ultra-HD color channels of sufficient size, for instance, float can start to round things -or- incur large amounts of recursive processing where ints may not be forced to.
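
To make the rounding point concrete, here is a tiny, self-contained illustration (plain C, nothing from my library): a 32-bit float carries only a 24-bit significand, so a long accumulation of wide channel values silently drops low bits that a 64-bit integer keeps.

[code]
/* float_rounding.c -- a 32-bit float has a 24-bit significand, so
 * accumulating many wide pixel values loses low-order bits that an
 * integer accumulator keeps exactly. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t isum = 0;
    float    fsum = 0.0f;

    /* Sum 2^24 samples of a 16-bit channel value. */
    for (uint32_t i = 0; i < (1u << 24); i++) {
        uint16_t sample = (uint16_t)(i & 0xFFFF);
        isum += sample;          /* exact */
        fsum += (float)sample;   /* rounds once fsum outgrows 2^24 */
    }

    printf("integer sum: %llu\n", (unsigned long long)isum);
    printf("float   sum: %.0f  (error: %lld)\n", fsum,
           (long long)isum - (long long)fsum);
    return 0;
}
[/code]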

point 3. I think this is somewhat missing the main aspect of what the framework's use case here would be. I'm suggesting it will be useful to task -all- of the Parallella's processing capabilities, not just the Epiphany. Primarily I'm talking about having system automation at a high level, which is relatively slow compared to the actual data streams being processed.

For instance, if you are simultaneously processing input from different subsystems and sending messages to make other subsystems react in accord, you need something like this. It's more about software architecture than hardware architecture; this really doesn't relate to how many Epiphany cores are available. It's for chaining together functionality made for the Epiphany into a software structure that matches the end-product robot.

Does this clear up anything?

Re: NASA Robotics Competition

Postby stealthpaladin » Mon Aug 19, 2013 4:04 am

Quick note on the float subject: basically, I start by using the value of an unsigned int as a ratio to its maximum 'all bits set to 1' value.
You can essentially get a customized and very swift version of float's ability to jump magnitude and express ratios by defining various mask operators.

For instance, the ability to modulo any number by any number within 4-16 cycles is possible by quickly sorting out its affinity to a power-of-two exponent and applying an offset.
Divide-by-multiply is also very easy with this, by changing magnitude correctly before applying a ratio.
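
Roughly, in C, the core of the ratio idea looks like this. The helper names are made up for illustration; this is a sketch of the concept, not code from the library itself.

[code]
/* q032.c -- sketch of the ratio idea above: a uint32_t read as a
 * fraction of its all-bits-one maximum (Q0.32 fixed point).
 * Helper names are hypothetical. */
#include <stdio.h>
#include <stdint.h>

/* Precompute num/den as a Q0.32 ratio (the one real division). */
static uint32_t q032_ratio(uint32_t num, uint32_t den)
{
    return (uint32_t)(((uint64_t)num << 32) / den);
}

/* "Divide by multiply": scale x by r/2^32 with a 64-bit multiply,
 * a rounding offset, and a shift -- no runtime division. */
static uint32_t q032_scale(uint32_t x, uint32_t r)
{
    return (uint32_t)((((uint64_t)x * r) + (1ULL << 31)) >> 32);
}

int main(void)
{
    uint32_t r = q032_ratio(1, 1000);  /* ~ 1/1000 in Q0.32 */
    for (uint32_t x = 10000; x <= 30000; x += 10000)
        printf("%u / 1000 ~= %u\n", x, q032_scale(x, r));
    return 0;
}
[/code]

The one real division happens when the ratio is built; every scale after that is just a multiply and a shift.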

These concepts are used in a signal processing library I've written in C++, and also to implement a "spectrum" data primitive in some other languages.
I've had a few posts about similar things that you've responded to, so I wanted to mention that it's just something I work on and enjoy porting to different architectures. At the same time, it's got a number of novel uses.

Re: NASA Robotics Competition

Postby Gravis » Tue Aug 20, 2013 9:01 am

stealthpaladin wrote:point 3. I think this is somewhat missing the main aspect of what the framework's use case here would be. I'm suggesting it will be useful to task -all- of the Parallella's processing capabilities, not just the Epiphany. Primarily I'm talking about having system automation at a high level, which is relatively slow compared to the actual data streams being processed.


the ARM capabilities are minuscule in comparison and i wouldn't want to needlessly put verilog/vhdl in the mix. besides, if you need more processing power, just link the epiphany chips using multiple parallella boards. processing power is cheap, time is not.

Re: NASA Robotics Competition

Postby stealthpaladin » Wed Aug 21, 2013 4:08 am

Gravis wrote:the ARM capabilities are minuscule in comparison and i wouldn't want to needlessly put verilog/vhdl in the mix. besides, if you need more processing power, just link the epiphany chips using multiple parallella boards. processing power is cheap, time is not.


Definitely this is true, and that is still the case. The point of a higher-level orchestration framework running on the ARM is completely separate from the specialized tasks being run on the Epiphany(ies). This won't introduce any performance penalties to your individual HPC needs whatsoever.

The use case is clearer if you consider the chain of events and separation of concerns needed to have a bot wander around on its own and operate in different contexts. Without getting lost, stuck, destroyed or otherwise incapacitated, it's got to be able to intelligently detect and react to situations over time, based on different results from different processes.

Running OpenCL, you've got the ARM acting as the host and the Epiphany operating as a client device. Essentially what I'm talking about is a robust host server whose structure and performance in the host domain are very highly useful for general robotics or factory automation. I can go into further detail on that, but it's important to get that the framework is not what's processing all the fancy stuff; it does the fancy component routing, messaging and whatnot, essentially turning the Linux side into a profile of your given (problem domain x machine environment) for the Epiphany to accelerate by JIT tasking of pre-made utilities.
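
For the curious, the host/client split looks something like this bare-bones OpenCL host skeleton (error handling omitted; the "scale" kernel is a stand-in, and on the Parallella the accelerator device would come from an OpenCL implementation such as COPRTHR). The framework I'm describing would sit above this layer, not replace it.

[code]
/* ocl_host.c -- bare-bones sketch of the ARM-host / Epiphany-client
 * arrangement. The "scale" kernel is a stand-in; error handling is
 * omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global int *d) {"
    "    size_t i = get_global_id(0);"
    "    d[i] *= 2;"
    "}";

int main(void)
{
    int data[16];
    for (int i = 0; i < 16; i++) data[i] = i;

    cl_platform_id plat;  cl_device_id dev;  cl_int err;
    clGetPlatformIDs(1, &plat, NULL);
    /* The Epiphany shows up as an accelerator-type device. */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_ACCELERATOR, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, &err);
    clSetKernelArg(k, 0, sizeof(buf), &buf);

    /* Dispatch across the device and read the results back. */
    size_t global = 16;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("d[5] = %d\n", data[5]);  /* expect 10 */
    return 0;
}
[/code]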

This especially makes it easy for anyone to contribute new software capabilities for the Epiphany that can be easily managed from the terminal, GUI or web interface (for other projects).

In this setting, any useful Epiphany process, written in whatever language you manage to get it in, can easily become part of an Epiphany ABI, and the host ARM can utilize it intelligently without getting in the way of your process at all.

Re: NASA Robotics Competition

Postby Gravis » Wed Aug 21, 2013 10:38 am

stealthpaladin wrote:The point of a higher-level orchestration framework running on the ARM is completely separate from the specialized tasks being run on the Epiphany(ies). This won't introduce any performance penalties to your individual HPC needs whatsoever.

Running OpenCL, you've got the ARM acting as the host and the Epiphany operating as a client device. Essentially what I'm talking about is a robust host server whose structure and performance in the host domain are very highly useful for general robotics or factory automation.

why use the ARM at all? the penalty would be a single core being used, which is a good trade-off for simplicity. like i said, processing power is cheap; add another epiphany to the mix and you get a lot more. also, why would you bother with anything but native code?

you can cut out the ARM completely by using the FPGA for initialization and external memory I/O. 32KB per core is plenty, but if you insist on more, just load some from external memory. it's far less complex and much faster. why complicate matters?

stealthpaladin wrote:This especially makes it easy for anyone to contribute new software capabilities for the Epiphany that can be easily managed from the terminal, GUI or web interface (for other projects).

sure, you could have that stuff using the ARM to monitor the epiphany status, but there is no need to have it be involved in anything beyond providing a gdb server interface. no OS is required at all.

Re: NASA Robotics Competition

Postby stealthpaladin » Thu Aug 22, 2013 9:53 am

Gravis wrote:
why use the ARM at all? the penalty would be a single core being used, which is a good trade-off for simplicity. like i said, processing power is cheap; add another epiphany to the mix and you get a lot more. also, why would you bother with anything but native code?

you can cut out the ARM completely by using the FPGA for initialization and external memory I/O. 32KB per core is plenty, but if you insist on more, just load some from external memory. it's far less complex and much faster. why complicate matters?


Well...
Reason #1 is, foremost, to use the two cores on the ARM. They exist; why would I not use them? This relates somewhat to complexity, but since I've got a framework, it makes using the Epiphany *and* the FPGA vastly less complex. The only cost of that simplification is that the ARM is busy - so, great win!

Reason #2 is that both the Epiphany and the FPGA are really intended to be dedicated to a particular task or set of tasks at a given time -- often while handling a large stream or dataset. While you are doing that, losing a chunk of the processor means you can no longer use the WHOLE processor for the task. Maybe you can have a subset of cores switch in and out of some kind of "system mode", but this still breaks the throughput.

Reason #3 is for modularity and accessibility. Most software developers are in a different domain than Epiphany developers. Even so, many would be able to use an interface they are familiar with to task packages made for the Epiphany or FPGA from the host. Interfaces such as terminal/CGI, JSON/XML-RPC, GUI and web interfaces are automatically generated on the host side.

Reason #4 is for architecture: the ability to take these things and map a machine to a chain of contexts. It's somewhat out of the scope of this thread to identify the usefulness of this in robotics and significant mechanical equipment.

Reason #5 is providing events to meta-systems and keeping them async in a multi-architecture setup, without having to think about it as hard.

Given that these features come out of the box, along with others, and the only cost is that the ARM is in use, it makes it much easier for developers to approach a project.

Gravis wrote:sure, you could have that stuff using the ARM to monitor the epiphany status, but there is no need to have it be involved in anything beyond providing a gdb server interface. no OS is required at all.


Even without it being required, it makes sense to utilize it. The dual-core ARM is in a great position to orchestrate beyond monitoring, with monitoring being a side effect. There's certainly very little downside to using a high-level deployment, and any dedicated processing on the Epiphany or FPGA can easily be set up and continue on its own.

Re: NASA Robotics Competition

Postby shr » Tue Sep 24, 2013 8:07 pm

Gravis wrote:...
why use the ARM at all? the penalty would be a single core being used, which is a good trade-off for simplicity. like i said, processing power is cheap; add another epiphany to the mix and you get a lot more. also, why would you bother with anything but native code?

you can cut out the ARM completely by using the FPGA for initialization and external memory I/O. 32KB per core is plenty, but if you insist on more, just load some from external memory. it's far less complex and much faster. why complicate matters?
...


Different strokes for different folks. Using the ARM for high-level supervision and coordination broadens the range of tools and developers that can be applied. Leveraging the FPGA is more hardware-efficient; leveraging the ARM/Linux and high-level tools is more developer-efficient. Either approach could be effective for the Sample Return Robot Challenge. The promise of such flexibility is one of the attractive aspects of the Parallella. It would be interesting to see teams representing both approaches in the competition.
“At that time [1909] the chief engineer was almost always the chief test pilot as well. That had the fortunate result of eliminating poor engineering early in aviation” — Igor Sikorsky
