Inserting a contact / Android architecture

by brnzn » Thu, 01 Jan 2009 08:06:22 GMT


 I have a question on inserting a contact, but it's the general
principle I'm most interested in.  I'd really love to hear from one of
the Android Engineers on this, because currently I find some of the
design decisions in Android perplexing, and I'd like to understand why
it is as it is.

The most direct way to insert a contact seems to be to use
Contacts.People.createPersonInMyContactsGroup(...).  Something like
this:

        // Describe the new person as a bag of key/value pairs
        ContentValues values = new ContentValues();
        values.put(People.NAME, "Bernie");

        // Insert it; the call returns a Uri for the new row (ignored here)
        Contacts.People.createPersonInMyContactsGroup(
                        getContentResolver(),
                        values);

We're constructing the person entity as a map.  This brings up a few
potential problems:
1. How do we know what the keys should be?  By convention Android's
own classes (such as People) declare static constants to tell us what
those keys should be, but this is only a convention.
2. How do we know what datatype is appropriate for a given key?
There's nothing to stop you, or warn you, when you push the wrong type
of data in (see the sketch after this list).
3. How do we know what combination of keys is valid?  If a given
entity has mandatory properties, there's nothing to stop you or warn
you if you forget to set some of those mandatory properties.
4. We're potentially allocating two objects for every value - one for
the value itself and one for the key.  Where we use static constants,
this is less of an issue.
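
For example, here is a minimal sketch (my own, not from the SDK docs)
of how both of those mistakes sail straight past the compiler:

        // Illustration only: neither mistake is caught at compile time.
        ContentValues values = new ContentValues();
        values.put("nmae", "Bernie");   // typo in the key: only fails (or misbehaves) at runtime
        values.put(People.NAME, true);  // wrong datatype: the compiler happily boxes the boolean

        Contacts.People.createPersonInMyContactsGroup(
                        getContentResolver(),
                        values);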

Why does the API not provide a Person object that we can populate and
then pass in to be persisted?  Perhaps something like this:

        Person p = new Person("Bernie");
        p.addGroupMembership("My Contacts");

        getContentResolver().insert(Person.CONTENT_URI, p);

The Person class clearly tells us (a rough sketch follows below):
1. What values can be set (by the existence of bean properties)
2. What datatype each property is
3. What values are mandatory (probably by virtue of them being
required in constructor calls)
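
To make that concrete, here is one way such a class might look; the
class and its methods are all imagined for the sake of argument, not
part of the current SDK:

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical: what the suggested Person API might look like.
        public class Person {
            private final String name;              // mandatory, so it lives in the constructor
            private final List<String> groups = new ArrayList<String>();
            private String primaryPhone;             // optional, so it is a plain bean property

            public Person(String name) {
                if (name == null) {
                    throw new IllegalArgumentException("name is mandatory");
                }
                this.name = name;
            }

            public void addGroupMembership(String group) {
                groups.add(group);
            }

            public void setPrimaryPhone(String primaryPhone) {
                this.primaryPhone = primaryPhone;
            }

            public String getName() { return name; }
            public List<String> getGroupMemberships() { return groups; }
            public String getPrimaryPhone() { return primaryPhone; }
        }

The compiler then enforces the keys, the types, and (via the
constructor) the mandatory values for free.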

Is it a classloading issue?  I'm guessing no because we could easily
overcome that by making Person Serializable or Parcelable, splitting
it out into a JAR to be linked into whatever app needs it.
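
And if the concern is shipping instances between processes, the same
hypothetical class could implement Parcelable in the usual way (again,
just a sketch, assuming the single mandatory name field above):

        import android.os.Parcel;
        import android.os.Parcelable;

        // Hypothetical: how the imagined Person could be marshalled across processes.
        public class Person implements Parcelable {
            private final String name;

            public Person(String name) { this.name = name; }

            public int describeContents() { return 0; }

            public void writeToParcel(Parcel dest, int flags) {
                dest.writeString(name);
            }

            public static final Parcelable.Creator<Person> CREATOR =
                    new Parcelable.Creator<Person>() {
                public Person createFromParcel(Parcel in) {
                    return new Person(in.readString());
                }
                public Person[] newArray(int size) {
                    return new Person[size];
                }
            };
        }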

So what then is the philosophy that has driven the architecture to be
this way?


brnzn




Inserting a contact / Android architecture

by Mark Murphy » Thu, 01 Jan 2009 08:45:44 GMT


 


I can't speak for the core Android team, so I cannot say specifically
why they made the design decision.

However, given their repeated insistence that they will not break binary
interfaces with Android OS updates, my guess is that's at least part of
their rationale. The definition of a "contact" has potential to change
significantly. They may feel that a more generic interface will help
them expand the scope of contacts without changing existing uses of the API.

To draw an analogy, it's a variant on the strong-vs.-weak typing
argument you see when Rubyists and Javanauts square off. Strong typing
has definite benefits, in terms of compile-time validations and
potential for optimization. However, strong typing can also "get in the
way", which is why some folk prefer weakly-typed or type-inferred languages.

Anyway, that's my guess. Take it with a grain of salt. Preferably a
large grain of salt, maybe one cubic foot or so... ;-)

-- 
Mark Murphy (a Commons Guy)
 http://commonsware.com 
_The Busy Coder's Guide to Android Development_ Version 1.9 Published!





Other Threads

1. AV sync issue in Real Media porting

I am porting Real Media to Android. Please let me know the following:

1) Can I share some global variables between the parser interface
file (fileformats/rm/parser/src/irmff.cpp) and omx_ra_component.cpp?
If so, how can I share them?

2) After a seek, can I drop audio frames before giving them to the
renderer at media_output_inport, based on the targetNPT of the seek
and the actual PTS of the audio packet received after the seek?

Here is some detail of the issue while porting:

I have integrated Real Media (RM) into Eclair. In Real Media files, the
audio codec can be one of two formats: AAC or G2COOK (also called Real
Media 8 Low Bitrate; a constant-bitrate codec, whereas AAC is a VBR
codec).

During normal playback, AV sync is fine in all Real Media files
irrespective of whether the audio format is AAC or G2COOK. After a
seek, however, AV sync is still fine in files with AAC audio, but
there is an AV sync mismatch in all files with G2COOK audio. Most of
the time, audio is heard ahead of video.

One of the differences between the AAC and G2COOK formats is that with
AAC, each RM audio packet carries multiple encoded audio frames and
can be decoded by the decoder independently.

With G2COOK, on the other hand, the RM audio packets are interleaved
up to a factor of 30 packets, and the decoder needs to buffer all 30
packets before it deinterleaves them and decodes the encoded frames in
each of those packets.

The first of these 30 packets is called an audio keyframe. So,
essentially, after repositioning (seeking) in the file, I need to look
for the closest audio keyframe and video keyframe around the seek
position and return the PTS of those two packets to the player upon
request.

One observation on the PTS of the AV keyframe packets after a seek
with COOK audio:
TargetNPT = 27500 ms
Audio PTS = 25000 ms
Video PTS = 20500 ms

while the PTS of the AV keyframe packets after a seek with AAC audio is:
TargetNPT = 27500 ms
Audio PTS = 27430 ms
Video PTS = 20500 ms

So there can be a difference of as much as 6-7 seconds between the
keyframes of the audio and video streams.

In omx_ra_component.cpp, whenever I receive an audio keyframe, I just
memcpy the packet into the decoder's internal memory and report the
input buffer as fully consumed, with the decoded output size as zero.
I do this until I have received 29 packets; after receiving the last
packet of the audio block, I again report the input buffer as fully
consumed, but this time I also return the total decoded samples of
the 30 audio packets received so far.
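
For what it's worth, the shape of that buffering logic is roughly the
following (plain Java pseudocode with invented names, purely for
illustration; the real code is C++ inside the OMX component):

        // Illustration only: accumulate the 30 interleaved G2COOK packets,
        // reporting zero decoded output until the whole superblock is present.
        class CookSuperblockAccumulator {
            static final int PACKETS_PER_SUPERBLOCK = 30;
            private int buffered = 0;

            int onInputPacket(byte[] packet, boolean isKeyframe, short[] decodedOut) {
                if (isKeyframe) {
                    buffered = 0;                  // a keyframe starts a new superblock
                }
                copyIntoInternalBuffer(packet);    // hypothetical helper: the memcpy step
                buffered++;

                if (buffered < PACKETS_PER_SUPERBLOCK) {
                    return 0;                      // input consumed, nothing decoded yet
                }

                // Last packet of the superblock: deinterleave, decode everything,
                // and report the total number of decoded samples.
                int samples = deinterleaveAndDecode(decodedOut);   // hypothetical helper
                buffered = 0;
                return samples;
            }
        }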

While the AV sync mismatch appears right after the seek, the same
constant delay between audio and video persists until EOF, with the
audio always played ahead. Any pointers to resolve the issue are
highly appreciated.

Here are some pointers I got from the Helix community. This is the
explanation I got from the Real Media people regarding the AV sync
issue:

""In case of cook audio the timestamp of first packet (audio keyframe)
of the superblock after seek will be always be ahead of the playback
clock, when compared to that of aac where every packet is a key
frame.

Helix player is aware of this and hence after decoding the entire
superblock the client engine will clip the decoded buffer at the
begining before  giving it to the renderer.

The number of frames of decoded audio that will be clipped will be
calculated using the sampling rate of the audio , play duration of the
current decoded super block as well as the difference between PTS of
the first packet of superblock with refrence to the actual playback
clock.""


Any PV experts, please let me know how I can drop the decoded audio
before giving it to the renderer in OpenCORE, since at the OMX
component level, after decoding, I do not have the playback clock and
hence the difference of the audio packet's PTS w.r.t. the playback
clock.


2. TextView cuts off part of an initial capital "J"

This may result from its background property.


3. AIDL: Stub class methods not being called

4. change booting Monkey image.

5. How to create the same activity in a new process?

6. Question concerning View creation

7. what happens exactly when android goes to sleep?