video encoding on device

by berliner » Thu, 15 Jul 2010 16:48:32 GMT


 Hi,

I'm working on a project whose objective is to show synthesized video messages
to the user. The first approach was to do the video / image processing directly
on the device, but I'm no longer sure that this is a good idea. The basic
workflow for the service (and the activity that shows the final video) is to
generate a set of images from speech parameters using a C++ library (AAM
Library: http://code.google.com/p/aam-library ). That works so far; it takes a
bit of time, but this step is not very time critical. For the presentation to
the user I figured I would need a video file in order to show a proper
animation. At first I did the image animation with the Animation class on the
Java side, but since all the images have to be decoded into bitmaps, this hits
the memory limit very quickly (running on the emulator), after approximately 40
to 50 images. So that seems to be the wrong way. The next idea was to use
ffmpeg to generate the video on the C/C++ side of the application, but I
suspect (without any experience) that ffmpeg would hit memory limits as well.
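
To put rough numbers on the memory problem: assuming 480x800 ARGB_8888 frames,
each decoded bitmap needs about 480 * 800 * 4 bytes, roughly 1.5 MB, so 40 to
50 frames already far exceed the ~16 MB heap the emulator gives an app. For
what it's worth, here is a rough, untested sketch of decoding the frames one at
a time instead of holding them all as bitmaps; the file names, frame rate, and
sample size are just placeholders:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Handler;
import android.widget.ImageView;

// Rough sketch only: plays a pre-rendered image sequence by decoding one
// frame at a time, so only the current and previous bitmaps are alive at once.
public class FramePlayer {
    private final ImageView view;
    private final Handler handler = new Handler();
    private final String frameDir;   // e.g. getFilesDir() + "/frames" (placeholder)
    private final int frameCount;
    private int current = 0;
    private Bitmap previous;

    public FramePlayer(ImageView view, String frameDir, int frameCount) {
        this.view = view;
        this.frameDir = frameDir;
        this.frameCount = frameCount;
    }

    private final Runnable step = new Runnable() {
        public void run() {
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inSampleSize = 2;  // downsample if the frames are larger than the screen
            Bitmap next = BitmapFactory.decodeFile(
                    frameDir + "/frame_" + current + ".png", opts);
            view.setImageBitmap(next);
            if (previous != null) {
                previous.recycle();  // free the pixels of the frame no longer shown
            }
            previous = next;
            current++;
            if (current < frameCount) {
                handler.postDelayed(this, 1000 / 15);  // ~15 fps, just a guess
            }
        }
    };

    public void start() {
        handler.post(step);
    }
}

Whether decoding a PNG per frame is fast enough for smooth playback on a real
device is another question, which is part of why a proper video file still
seems attractive.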

What would be the alternatives to the described approach? Video processing on a
server and streaming the generated video to the device?
I would be thankful for any hints, experience, or general suggestions.
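
If the server route turns out to be the way to go, I imagine the playback side
on the device would be fairly simple; a minimal sketch of what I have in mind
(the URL, layout, and ids are made up, and the server-side encoding is not
shown):

import android.app.Activity;
import android.media.MediaPlayer;
import android.net.Uri;
import android.os.Bundle;
import android.widget.VideoView;

// Sketch of playing back a video that a server generated from the speech
// parameters; the device only streams and displays it.
public class VideoMessageActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.video_message);  // layout containing a single VideoView

        final VideoView videoView = (VideoView) findViewById(R.id.video_view);
        videoView.setVideoURI(Uri.parse("http://example.com/generated/message.mp4"));
        videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            public void onPrepared(MediaPlayer mp) {
                videoView.start();  // start once the stream is prepared
            }
        });
    }
}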

best regards,
berliner

--


