Controlling an Android device via additional UIs

by kenpark » Fri, 29 May 2009 17:49:23 GMT


I want Android (or Android devices) to be controllable via additional,
external user interfaces, like a kind of remote interface. In other
words, I want to design a software interface that enables Android
devices to make use of a second system's UIs. I imagine it like this:
For example, you have your Android device (let's call it the first
system) and a regular PC (let's call this the second system). In this
case the second system has much more comfortable user interfaces (such
as its hardware keyboard) for performing tasks like writing a text
message. Now I want the first device -- the Android phone -- to connect
(e.g. via wireless LAN) to the second system. The second system then
provides a representation (views and so on) that delegates the user's
interaction to the Android device, where it is processed by the
currently running activity (displayed on the external UIs). The
representation provided by the second system could be optimized for the
second system's UIs; for example, views could be presented larger
because the PC's bigger screen makes it possible to enlarge them.
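
To make the idea more concrete, here is a rough sketch of what I have
in mind on the device side, assuming a very simple line-based protocol
over a plain TCP socket (the port number, layout and view names are
made up, and the app would need the INTERNET permission):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    import android.app.Activity;
    import android.os.Bundle;
    import android.util.Log;
    import android.widget.EditText;

    public class RemoteInputActivity extends Activity {

        private static final int REMOTE_PORT = 4711; // arbitrary port for this sketch
        private EditText messageBody;                // e.g. the body of a text message

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            messageBody = (EditText) findViewById(R.id.message_body);

            // Accept one connection from the PC and append every received line
            // of text to the EditText, as if it had been typed on the phone.
            new Thread(new Runnable() {
                public void run() {
                    try {
                        ServerSocket server = new ServerSocket(REMOTE_PORT);
                        Socket pc = server.accept(); // the PC connects over WLAN
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(pc.getInputStream()));
                        String line;
                        while ((line = in.readLine()) != null) {
                            final String text = line;
                            // Views may only be touched on the UI thread.
                            runOnUiThread(new Runnable() {
                                public void run() {
                                    messageBody.append(text);
                                }
                            });
                        }
                    } catch (IOException e) {
                        Log.e("RemoteInput", "remote UI link failed", e);
                    }
                }
            }).start();
        }
    }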

But I wonder at which level of the framework stack one would have to
make changes to remote the user interface to an external system's UI.
So far I have thought of two possible solutions:

A.
The second system (e.g. the PC) presents a proxy representation. That
is, if a button is shown on the Android device, a proxy for this button
is also shown on the PC's screen. When the user activates the proxy
button on the PC, this event triggers a remote procedure call to the
Android device, activating the Android device's button. This could be
implemented, for example, using RMI techniques. (I know RMI is
currently not supported in the stack, but I guess those libraries could
be added to it.)
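
To illustrate the proxy idea on the device side, here is a rough
sketch. The transport is left out (it could be the same kind of socket
as above), and the command format, class name, and "btn_send" id are
all made up; calling performClick() on the real view is what would
finally trigger the button's listener:

    import android.app.Activity;
    import android.view.View;

    /**
     * Device-side half of the proxy idea: turns a command sent by the PC,
     * e.g. "CLICK btn_send", into a click on the corresponding local view.
     */
    public class RemoteProxyDispatcher {

        private final Activity activity;

        public RemoteProxyDispatcher(Activity activity) {
            this.activity = activity;
        }

        public void dispatchRemoteCommand(String command) {
            String[] parts = command.split(" ");
            if (parts.length != 2 || !"CLICK".equals(parts[0])) {
                return; // only clicks are handled in this sketch
            }
            // Resolve the view id from its resource name,
            // e.g. "btn_send" -> R.id.btn_send.
            final int viewId = activity.getResources().getIdentifier(
                    parts[1], "id", activity.getPackageName());
            if (viewId == 0) {
                return; // the current activity has no such view
            }
            activity.runOnUiThread(new Runnable() {
                public void run() {
                    View target = activity.findViewById(viewId);
                    if (target != null) {
                        target.performClick(); // fires the real OnClickListener
                    }
                }
            });
        }
    }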

B.
In his Google I/O talk titled "Anatomy & Physiology of an Android",
Patrick Brady talked about the "Surface Manager" at the native
libraries level of Android. I thought that editing this functionality
could also be an approach. But first, I haven't found any documentation
on this so-called "Surface Manager", and second, I am not sure this is
the right place to hook in if you want to offer external UIs that are
optimized for things like size and layout.

Since I am a total newbie on Android, I am interested in the framework
developers' thoughts about my idea.
Any hints, help, and criticism are welcome.

Regards,
Patrick



Other Threads

1. How to send audio file to voice channel during call

Hi,

As you know, during a call one person speaks and the listener hears
the voice. But I have a special requirement: I want to send music
directly to the listener during the call.

This would require Android to have an interface that receives audio
and sends it over the TCH (Traffic Channel).

Do you know how to make this happen?
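
So far, the closest thing I have found in the public SDK is routing
playback to the voice-call audio stream, but as far as I can tell this
is only heard locally on the handset and is not mixed into the uplink
TCH (the file path passed in would just be an example):

    import java.io.IOException;

    import android.media.AudioManager;
    import android.media.MediaPlayer;

    public class InCallPlayer {

        // Plays a local file on the in-call audio stream. The far end will
        // not hear it; the SDK exposes no way to mix audio into the uplink
        // traffic channel, which would need changes below the framework.
        public MediaPlayer playLocallyDuringCall(String path) throws IOException {
            MediaPlayer player = new MediaPlayer();
            // The stream type must be set before prepare().
            player.setAudioStreamType(AudioManager.STREAM_VOICE_CALL);
            player.setDataSource(path);
            player.prepare();
            player.start();
            return player;
        }
    }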




2. Question about stop sequence in Donut OpenCORE's camera MIO

In Donut, OpenCORE AndroidCameraInput's DoStop() stops recording in
the following sequence:

    mCamera->setListener(NULL);   // detach the frame listener first
    mCamera->stopRecording();     // then ask the camera to stop recording
    ReleaseQueuedFrames();        // return any frames still queued in the MIO

However, during the period between clearing the listener and calling
stopRecording(), buffers sent from the camera to AndroidCameraInput
will be lost, i.e. not received by the listener and thus never sent
back to the camera through releaseRecordingFrame(). If the camera keeps
waiting for all of its buffers to come back from OpenCORE, it might
hang there. Would this be a problem? Should we move stopRecording()
before setListener(NULL)?

Thanks


3. AppAccelerator for Android and iPhone

4. rotation of phone

5. Struggling with Google Maps

6. How to add a ScrollView to a LinearLayout programmatically.

7. Chart API