video encoding on device

by berliner » Thu, 15 Jul 2010 16:48:32 GMT


Hi,

I'm working on a project whose objective is to show synthesized video messages 
to the user. The first approach was to do the video/image processing directly 
on the device, but I'm no longer sure that this is a good idea. The basic 
workflow for the service (and the activity that shows the final video) is to 
generate a set of images from speech parameters, using a C++ library 
(AAM Library: http://code.google.com/p/aam-library ). That works so far; it 
takes a bit of time, but this step is not very time-critical.

For the presentation to the user, I figured I would need a video file in order 
to show a proper animation. At first I did the image animation with the 
animation class on the Java side, but since all the images have to be decoded 
into bitmaps, this hits the memory limit very quickly (running in the 
emulator), after roughly 40 to 50 images. So that seems to be the wrong way.
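
One thing I might try is decoding a single frame at a time and recycling the 
previous one, roughly like the sketch below (the file naming scheme, frame 
rate, and class name are just placeholders):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Handler;
import android.widget.ImageView;

import java.io.File;

// Rough sketch: play pre-rendered frames one at a time instead of decoding
// them all into memory up front. Construct and start this on the UI thread.
public class FramePlayer {
    private static final int FRAME_DELAY_MS = 66; // ~15 fps, adjust to taste

    private final ImageView view;
    private final File frameDir;   // directory with frame_0000.png, frame_0001.png, ...
    private final int frameCount;
    private final Handler handler = new Handler();
    private int current = 0;
    private Bitmap previous;

    public FramePlayer(ImageView view, File frameDir, int frameCount) {
        this.view = view;
        this.frameDir = frameDir;
        this.frameCount = frameCount;
    }

    public void start() {
        handler.post(showNextFrame);
    }

    private final Runnable showNextFrame = new Runnable() {
        public void run() {
            if (current >= frameCount) {
                return; // done
            }
            File frame = new File(frameDir, String.format("frame_%04d.png", current++));
            Bitmap next = BitmapFactory.decodeFile(frame.getAbsolutePath());
            view.setImageBitmap(next);
            if (previous != null) {
                previous.recycle(); // free the native pixel memory of the old frame
            }
            previous = next;
            handler.postDelayed(this, FRAME_DELAY_MS);
        }
    };
}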
The next idea was to use ffmpeg to generate the video on the C/C++ side of the 
application, but I suspect (without having tried it) that ffmpeg would run into 
memory limits as well.
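
If the ffmpeg route turns out to be feasible, I assume the Java side would only 
need a thin JNI wrapper along these lines, with the actual encoding loop living 
in native code (the library name and method signature below are made up):

// Hypothetical JNI bridge to a native encoder built around ffmpeg/libavcodec.
// The library name, method signature and error convention are placeholders;
// the real encoding work would happen in the C/C++ implementation.
public class NativeEncoder {
    static {
        System.loadLibrary("videoencoder"); // loads libvideoencoder.so built with the NDK
    }

    /**
     * Encodes the image frames found in frameDir into a video file at outputPath.
     * Returns 0 on success or a negative error code on failure.
     */
    public static native int encodeFrames(String frameDir, String outputPath, int fps);
}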

What would be the alternatives to the approach described above? Video 
processing on a server and streaming of the generated video to the device?
I would be grateful for any hints, experience, or general suggestions.
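
In case the server-side route is the answer, I assume playback on the device 
would boil down to pointing a VideoView at the generated file, roughly like 
this (the URL is a placeholder, and the format would have to be something the 
device can play, e.g. H.264/MP4 or 3GP):

import android.app.Activity;
import android.media.MediaPlayer;
import android.net.Uri;
import android.os.Bundle;
import android.widget.MediaController;
import android.widget.VideoView;

// Minimal playback sketch for a video rendered on the server. The URL is a
// placeholder; the activity streams the file and starts playback once the
// player is prepared.
public class VideoMessageActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        VideoView videoView = new VideoView(this);
        setContentView(videoView);

        videoView.setMediaController(new MediaController(this));
        videoView.setVideoURI(Uri.parse("http://example.com/generated/message.mp4"));
        videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            public void onPrepared(MediaPlayer mp) {
                mp.start();
            }
        });
    }
}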

best regards,
berliner

--


