User:Eb4890
Contact
My mobile phone number can be found in my babbage share.
Interests
- Reverse engineering the laser cutter software
- Node.js-based social thingmy
- Augmented Reality
- Human-computer interface/wearable computers
- Random silly projects
- Cutting Stuff with lasers
- Odd low-level security software
- Machine Learning
- Weird Governance schemes
- Shiny things
- Viruses/synthetic biology
- Chemistry
Android Robotics
See cellbots (http://www.cellbots.com/) for other people's current work.
Why
Android phones/tablets will get stupidly cheap in the future. They have a bunch of sensors and wireless comms, plus the ability to connect to random microcontroller-based USB devices. They also have GPUs that give a bunch of parallel processing power for a low price, although that power can't be accessed nicely at the moment.
They do compete somewhat with Raspberry Pis (Raspberry Pis being stripped-down phones without the sensors/displays etc.). I expect two ecosystems will emerge: Android phones and the Android OS being used for robots that need comms and lots of phone functionality, with microcontrollers being used for autonomic procedures and low-latency apps. Raspberry Pis and rooted phones with non-Android ROMs will be used for situations where you need lots of processing power close to the metal.
Probably the best positioning for the Android phone is in a cheap/easy robot? Perhaps one based upon an RC car. The phone/tablet would act as the brains and eyes/ears, and the RC car as the locomotion.
The camera would be used for depth mapping and object recognition. Use Google APIs for speech recognition?
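As a sketch of the speech-recognition idea, the standard Android RecognizerIntent API hands an utterance to the platform recognizer and returns candidate transcriptions. The activity name, request code, and command handling below are placeholders:

    import android.app.Activity;
    import android.content.Intent;
    import android.speech.RecognizerIntent;
    import java.util.ArrayList;

    // Sketch: ask the platform recognizer for candidate transcriptions
    // of one utterance, to be mapped to robot movement commands.
    public class VoiceCommandActivity extends Activity {
        private static final int REQ_SPEECH = 1; // arbitrary request code

        private void listenForCommand() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say a movement command");
            startActivityForResult(intent, REQ_SPEECH);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            if (requestCode == REQ_SPEECH && resultCode == RESULT_OK) {
                ArrayList<String> guesses =
                        data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                // First entry is the most confident guess; map it to a
                // robot command here (hypothetical next step).
            }
            super.onActivityResult(requestCode, resultCode, data);
        }
    }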
Sensor Usage
- Accelerometer to check if it bumps into things (see the sketch after this list)
- Compass and accelerometer for dead reckoning to help SLAM
- Camera for depth mapping
- Camera for object recognition
- Microphone to detect problems with motors? Also movement commands?
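A rough sketch of the accelerometer bump detection mentioned above; the threshold is a made-up number that would need tuning on the actual robot:

    import android.content.Context;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Sketch: treat any sharp spike in acceleration magnitude as a collision.
    public class BumpDetector implements SensorEventListener {
        private static final float BUMP_THRESHOLD = 15f; // m/s^2, tune on the robot

        public BumpDetector(Context context) {
            SensorManager sm = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
            Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sm.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            float x = event.values[0], y = event.values[1], z = event.values[2];
            float magnitude = (float) Math.sqrt(x * x + y * y + z * z);
            // Gravity alone gives ~9.8; anything well above that is a jolt.
            if (magnitude > BUMP_THRESHOLD) {
                onBump(); // hypothetical hook: stop the motors, replan, etc.
            }
        }

        protected void onBump() { /* wire up to the drive code */ }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }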
Things to do
- Decide on a method for taking advantage of multi-core and GPUs on Android. This either means OpenCL, RenderScript, or hackily with OpenGL. OpenCL is more cross-platform, but RenderScript is working on Android today (at least for multi-core). GLSL scripts would allow use of the GPU today and would be usable on the Raspberry Pi too (GPUCV covers some of it already; see the shader sketch after this list).
- Mod OpenCV for android using the above PP framework.
- Port of PCL for android with above considerations.
- Check the possibility of flashing the Arduino/microcontroller attached to the Android phone over the ADK link (this would allow the phone to upload different low-latency programs dependent upon the task).
- Can we implement OpenCL for Android ourselves? In RenderScript, or the NDK?
- Speech recognition using the PP framework
- SLAM/FastSLAM using the above technologies
- Machine Learning library
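To make the GLSL option above concrete, here is a sketch of the Java-side boilerplate for compiling a fragment shader with the standard GLES20 bindings. The greyscale shader body and class name are illustrative, and the call has to happen on a thread with a current GL context:

    import android.opengl.GLES20;

    // Sketch: compile a trivial greyscale fragment shader using the standard
    // GLES20 bindings. Real CV filters would replace the shader body.
    public class ShaderFilter {
        private static final String FRAGMENT_SRC =
                "precision mediump float;\n" +
                "varying vec2 vTexCoord;\n" +
                "uniform sampler2D uTexture;\n" +
                "void main() {\n" +
                "  vec4 c = texture2D(uTexture, vTexCoord);\n" +
                "  float grey = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
                "  gl_FragColor = vec4(vec3(grey), 1.0);\n" +
                "}\n";

        // Must be called on a thread with a current EGL/GL context.
        public static int compileFragmentShader() {
            int shader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
            GLES20.glShaderSource(shader, FRAGMENT_SRC);
            GLES20.glCompileShader(shader);
            int[] status = new int[1];
            GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
            if (status[0] == 0) {
                throw new RuntimeException(GLES20.glGetShaderInfoLog(shader));
            }
            return shader;
        }
    }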
Why are we re-implementing lots of technologies? Because they are often based upon Intel technologies such as SSE, which ARM doesn't have. Lots of these things will serve a dual purpose: beyond robotics, they will enable better augmented reality and Google Goggles-style apps without having to use a server, giving better latency and off-network use.
On the plus side, we can re-use parts of the C code if we re-implement it in RenderScript. More realistically:
- Intent-to-ROS message translation in some fashion.
- Learn RenderScript: find simple CV projects to do.
- Integral images for Haar-like feature detection? (sketched after this list)
- GPUCV has GLSL programs for CV? Will they work on Android? Maybe do a pure Java port? Integratable with ROS? Should be.
- Too many question marks?
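Integral images are simple enough to sketch directly: each cell stores the sum of all greyscale pixels above and to the left, so any rectangle sum, the building block of Haar-like features, comes out in four lookups. A minimal version (the class is hypothetical):

    // Sketch: build an integral image so any rectangle sum costs four lookups,
    // which is the core trick behind Haar-like feature detection.
    public class IntegralImage {
        private final long[][] sums; // sums[y][x] = sum of pixels[0..y-1][0..x-1]

        public IntegralImage(int[][] grey) {
            int h = grey.length, w = grey[0].length;
            sums = new long[h + 1][w + 1];
            for (int y = 1; y <= h; y++) {
                for (int x = 1; x <= w; x++) {
                    sums[y][x] = grey[y - 1][x - 1]
                            + sums[y - 1][x] + sums[y][x - 1] - sums[y - 1][x - 1];
                }
            }
        }

        // Sum of the rectangle with top-left corner (x, y), width w, height h.
        public long rectSum(int x, int y, int w, int h) {
            return sums[y + h][x + w] - sums[y][x + w] - sums[y + h][x] + sums[y][x];
        }
    }

A Haar-like feature is then just the difference between two or three adjacent rectSum() calls.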
Things to investigate now
- Rosjava - http://code.google.com/p/rosjava/
- I'm currently trying to figure out how easy/quick it would be to create a 1D depth map from a slice of video from an Android phone, for doing mapping/navigation (e.g. SLAM) for a robot. It could probably do with an LED light as well, to illuminate the scene and create areas of contrast. For 2D stuff it seems people detect features such as corners; for 1D stuff detecting edges and gradients should suffice, and be easy-ish. It might be possible to use the phone's orientation as information on which direction to look. Feature extraction would be parallel; feature comparison would be serial (see the sketch below).
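As a starting point for the 1D case, here is a sketch that pulls candidate edge positions out of a single greyscale scan line by thresholding the intensity gradient; the threshold is a placeholder to tune against real camera frames:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: find candidate edge features along one scan line by looking for
    // large intensity gradients. These per-frame features could then be matched
    // between frames (the serial comparison step) to estimate depth/motion.
    public class ScanLineEdges {
        public static List<Integer> findEdges(int[] greyRow, int threshold) {
            List<Integer> edges = new ArrayList<Integer>();
            for (int x = 1; x < greyRow.length - 1; x++) {
                // Central-difference gradient; each pixel is independent,
                // so this loop is the trivially parallel part.
                int gradient = Math.abs(greyRow[x + 1] - greyRow[x - 1]);
                if (gradient > threshold) {
                    edges.add(x);
                }
            }
            return edges;
        }
    }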
Things to investigate later
- Integration with Kinect? Probably need a rooted tablet.
- RenderScript - http://developer.android.com/guide/topics/renderscript/index.html An OpenCL alternative available now? ZiiLabs has OpenCL running. RenderScript seems nicer in that it allocates jobs to cores at run time (rather than fixing them), so it can use multi-core as well as the GPU, but it can't currently use the GPU even in ICS. It depends how long Google takes to implement RenderScript GPU support versus OpenCL arriving. ZiiLabs only does OpenCL for their own chips, I think.
Work-in-progress tutorial on Inkscape
It is a bit of a pain in the ass to get Inkscape to output something useful for laser cutting if you are doing anything marginally complex.
Trace Bitmap is your friend
- Use A4 size. Some people swear by this to get the dimensions right.
- Objects such as text are not exported. Convert them to paths: Path -> Object to Path.
- Different bits of the SVG are not aligned correctly. Select all, then Path -> Combine to make it all one object. Then use a DXF editing program to put the different bits of the object in different layers, as the laser cutting software is not the best for doing this.