Forums | developer.brewmp.com

OK, so I found out what my problem was with the whole menu thing on my T720... I ran out of memory. But here's the thing: it's all in the device-native bitmaps. If I load an 8k BMP with LoadResData, it takes up 8k. But when I run CONVERTBMP on it, I swear it allocates like 30-40k. Why do device-native bitmaps take up so much memory? This is ridiculous.

I dunno why I never noticed this ridiculous memory consumption before.

I know why!
And it's really stupid, too.
OK, so check it out. I was loading a 130x130 4-bit bitmap. That's about 8k. When I loaded it from the resource file, it consumed 8k, like it should.
But when I ran CONVERTBMP on it, it ballooned to 33k.
Wanna know why?
Well, the T720 has a 12-bit screen, but I'm pretty sure each color is padded by 4 bits, so the colors in the frame buffer are really stored as 16-bit values.
130x130x16 bits = 33k or so.
So the device-native image format is 16 BIT!!!!!!!!!!!!!
AAAAAAAAARGH!
So you either use the painfully slow IImage interface to draw, or deal with fast but OBSCENELY BLOATED native images.
I probably never noticed it before because I used to use the Z800 a lot, and device-native is most likely 8-bit on that device. 2x larger than the 4-bit images I use, but easy to overlook.
Great. This totally sucks.
I'll have to note that in the next revision of my book. :)
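For the record, the arithmetic above checks out. Here's a quick plain-C sanity check of those sizes (the helper name is just illustrative; real allocations add header and row-padding overhead on top):
Code:
#include <stdio.h>

/* bytes = ceil(width * bpp / 8) * height */
static unsigned long bitmap_bytes(unsigned w, unsigned h, unsigned bpp)
{
   unsigned long rowBytes = (w * bpp + 7) / 8;  /* round rows up to whole bytes */
   return rowBytes * h;
}

int main(void)
{
   printf("4-bit  130x130: %lu bytes\n", bitmap_bytes(130, 130, 4));  /* 8450  */
   printf("16-bit 130x130: %lu bytes\n", bitmap_bytes(130, 130, 16)); /* 33800 */
   return 0;
}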

Well, it's a trade-off. Using 16 bits to hold 12 bits of data isn't space-efficient, but it makes blits faster. No matter where you blit, it's just a matter of doing a memcpy for each line. But if the pixels were packed, the blit implementation would have to do shifting and masking whenever the source and destination aren't lined up the right way.
In BREW 2.0, you can load a bitmap into a DIB (which does no conversion) and blit that DIB directly to the device bitmap, thereby choosing space efficiency over speed. So BREW 2.0 addresses the problem, at least partially, but you're still stuck with whatever DDB format the OEM chooses. (I realize you usually don't have much choice about what version of BREW you target.)
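To illustrate the speed half of that trade-off, here's a rough sketch of why aligned 16-bit pixels blit so cheaply - every row reduces to a single memcpy (the function and parameter names are invented for illustration; this is not a BREW API):
Code:
#include <string.h>

/* With 16-bit pixels, every source row lands on a 16-bit boundary in
   the destination, so each row copies in one shot.  Packed 12-bit
   pixels could start mid-byte, forcing per-pixel shift-and-mask work. */
void Blit16(unsigned short *pDst, int nDstPitch,       /* pitch in pixels */
            const unsigned short *pSrc, int nSrcPitch,
            int cx, int cy)
{
   int y;
   for (y = 0; y < cy; y++)
      memcpy(pDst + y * nDstPitch, pSrc + y * nSrcPitch,
             cx * sizeof(unsigned short));
}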

Do you know if the Java T720i does this conversion behind the scenes? The T720i seems faster than the T720c (BREW version). But I'm wondering if it converts everything to device-native when you create an Image... or if the Java version magically converts stuff on the fly, but very fast.

Quote:Originally posted by flarb
Do you know if the Java T720i does this conversion behind the scenes?
I don't know.

I looked online, and it appears the Java version of the T720 has some advanced hardware. Can anyone else confirm this?

Kevin,
Not sure if this is what you're looking for, but this is from the T720i spec sheet available over at Motorola:
Double Buffering ----------------------------- Supported.
That would make it much friendlier for graphics-intensive app/game developers. ; )
RC

So this must be the same issue I'm experiencing when running a 1.1-compiled app on the 2.1 emulator's LG 6000. After a call to CONVERTBMP(), a 4-bit image appears to need the memory that a 16-bit image of the same dimensions would use; the LG 6000 LCD has a 16-bit color depth.
Is there any way around this, or is it too much to ask for a 4-bit image to take the memory that a 4-bit image should need? Talk about greedy.

You think this might be related to my problem?
http://brewforums.qualcomm.com/showthread.php?s=&threadid=2076

Is your code using CONVERTBMP() somewhere to convert your emoticon images to native format? If so, I'd guess you're experiencing a similar problem.
I've just sent an email to BREW tech support to try to get a definitive answer on this issue - you know, is this what's happening, is there any way to fix it, etc. I'll post what I hear once they write back to me. :)

I don't use CONVERTBMP in my code... but maybe BREW does in one of its API calls.
I've also emailed tech support and sent them the link to my thread as well.

Nah; if you don't use CONVERTBMP explicitly, it doesn't get used. As an alternative, you can just use IImage. It's slower, but it doesn't seem to do the silly conversion.

Quote:Originally posted by Bekenn
Nah; if you don't use CONVERTBMP explicitly, it doesn't get used. As an alternative, you can just use IImage. It's slower, but it doesn't seem to do the silly conversion.
In my code (I'm the poster from the other thread) I use IImage, and I still run into a huge memory problem. My problem comes when using an IMenu with images on it.

Quote:Originally posted by John Jacecko
So this must be the same issue I'm experiencing when running a 1.1-compiled app on the 2.1 emulator's LG 6000. After a call to CONVERTBMP(), a 4-bit image appears to need the memory that a 16-bit image of the same dimensions would use; the LG 6000 LCD has a 16-bit color depth.
Is there any way around this, or is it too much to ask for a 4-bit image to take the memory that a 4-bit image should need? Talk about greedy.
The whole point of CONVERTBMP() is to convert a bitmap to the native format, so of course you'll end up with a bitmap that takes four times the memory when running on a device with a 16-bit display.
In BREW 2.0 and later, you can load your bitmap with ISHELL_LoadResBitmap() or ISHELL_LoadBitmap() (which do no conversion at all) and blit this directly to the device bitmap with IDISPLAY_BitBlt().
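A minimal sketch of that 2.0-and-later path (the resource file name and ID are placeholders, error handling is omitted, and the exact prototypes should be checked against AEEShell.h, AEEBitmap.h, and AEEDisp.h in your SDK):
Code:
#include "AEEShell.h"
#include "AEEBitmap.h"
#include "AEEDisp.h"

void DrawUnconverted(IShell *pShell, IDisplay *pDisplay)
{
   /* Load the bitmap as-is: no conversion, no 16-bit copy. */
   IBitmap *pBmp = ISHELL_LoadResBitmap(pShell, "myart.bar", 5001);
   if (pBmp) {
      AEEBitmapInfo bi;
      IBITMAP_GetInfo(pBmp, &bi, sizeof(bi));

      /* Blit the unconverted bitmap straight to the display;
         slower than a native blit, but far smaller in memory. */
      IDISPLAY_BitBlt(pDisplay, 0, 0, (int)bi.cx, (int)bi.cy,
                      pBmp, 0, 0, AEE_RO_COPY);
      IDISPLAY_Update(pDisplay);
      IBITMAP_Release(pBmp);
   }
}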

Thanks, Mark. It looks like I'll just have to work around the BREW 1.1 limitation of only being able to blit from bitmaps that have been run through CONVERTBMP() and the resulting increased memory requirements.

I suspect Java's "magic" is a decently implemented cache hiding behind those pretty Image.createImage() and Graphics.drawImage() calls. It works like a charm in BREW too, and it's easily abstracted into a reusable component.
I've got one now that, on a cache miss, reloads a zlib-compressed BMP from a file, uncompresses it, then CONVERTBMP()s it. Carefully implemented, it pays for itself in both space *and* time, and it opens the door to apps with many times more assets than can fit in RAM.
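One way to structure a byte-budgeted cache like that (a bare-bones sketch with invented names; a real BREW version would store the handles CONVERTBMP() returns and use the BREW allocation macros):
Code:
#include <stdlib.h>

typedef struct CacheEntry {
   int                id;       /* resource id of the image           */
   void              *pNative;  /* handle returned by CONVERTBMP()    */
   unsigned long      nBytes;   /* cost of the converted image        */
   struct CacheEntry *pNext;    /* list kept most-recently-used first */
} CacheEntry;

typedef struct {
   CacheEntry   *pHead;
   unsigned long nUsed;
   unsigned long nBudget;       /* total bytes the cache may hold */
} BmpCache;

/* Evict least-recently-used entries until nNeed more bytes fit. */
static void Cache_MakeRoom(BmpCache *pc, unsigned long nNeed)
{
   while (pc->pHead && pc->nUsed + nNeed > pc->nBudget) {
      CacheEntry **pp = &pc->pHead;
      CacheEntry  *pVictim;
      while ((*pp)->pNext)      /* walk to the LRU tail */
         pp = &(*pp)->pNext;
      pVictim = *pp;
      *pp = NULL;
      pc->nUsed -= pVictim->nBytes;
      /* Release pVictim->pNative here (SYSFREE() if CONVERTBMP()
         reported a realloc), then free the node (FREE() in BREW). */
      free(pVictim);
   }
}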

John,
Whether you use SDK 1.1 or 2.x, it's always going to be up to you to weigh the trade-off. One way conserves memory at the expense of speed; the other is fast at the expense of memory. There's simply no way around this; there never has been, and there never will be. The only exception is when your display's native pixel storage size equals your bitmap's pixel storage size, there's no padding/pitch on either one, and no format conversion or color-plane reordering is required. Since that's unlikely, the work gets done somewhere - by functions, explicitly called or not, or by the hardware.
-Mickey Portilla

Thanks, Mickey.

Sounds like a nifty solution.
Is the zlib utility shareable?

What do you mean by shareable? It's available as open source under the GNU license, if that's what you mean.

Dragon is right: the reference implementation of zlib is available under the LGPL at http://www.gzip.org/zlib/.
In an attempt to work around the LGPL, which my employer is afraid of, and being sensitive to BREW's memory limitations, I tried writing my own implementation from the specs at ftp://swrinde.nde.swri.edu/pub/png/documents/zlib/rfc-zlib.html and ftp://swrinde.nde.swri.edu/pub/png/documents/zlib/rfc-deflate.html, but it's way too slow for on-the-fly decoding, and unfortunately I can't share it (being alienated from the fruits of my labors as I am).
Here's one way I've spun gold out of straw on the space-time tradeoff. We also have another custom compression format, which is simple (bytewise, not bitwise) LZ77 over run-length encoding. It's similar to ZLIB, but without the Huffman coding. A talented coworker of mine put that one together. On our typical BMPs it "only" gives us 50% file-size reductions (we get ~70% from ZLIB), but the decompressor is almost as fast as CONVERTBMP on most phones.
Again, I can't share the code. All of this is build-up to the tiny bit of real helpfulness I can offer.
We also have a custom BAR format that includes a byte for each image indicating *which* compression was used: none, ZLIB, or RLE-LZ77. When we build the BAR, we have a template file with a parameter for each BMP indicating how it should be stored.
This way we can fine-tune our compression based on how each asset is used in the app. For instance, our splash screens and menu art tend to be really big but only get used at app startup, or when we don't really need to be "real time": they get ZLIB. Our in-game art, on the other hand, needs to be really fast in case there's a cache miss in-game. We use RLE-LZ77 on the larger ones of those, and nothing on the small art.
Hope this helps
-Jesse
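A per-image tag along the lines described above might look something like this (the struct layout and names are invented for illustration; the actual format isn't shown in the thread):
Code:
/* One directory entry per image in the custom archive; the
   compression byte says how the payload is stored. */
enum { COMP_NONE = 0, COMP_ZLIB = 1, COMP_RLE_LZ77 = 2 };

typedef struct {
   unsigned short id;           /* image id within the archive     */
   unsigned char  compression;  /* one of the COMP_* values above  */
   unsigned long  storedSize;   /* bytes as stored in the file     */
   unsigned long  rawSize;      /* bytes after decompression (BMP) */
} ArchiveEntry;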

Um, the licence looks more like BSD-style to me:
http://www.gzip.org/zlib/zlib_license.html
gzip, on the other hand, is GPL'ed. Stupid Linux'ers!
OpenBSD has a version of compress/decompress under the BSD licence which uses zlib.
zlib RFCs:
ftp://ds.internic.net/rfc/rfc1950.txt (zlib format), rfc1951.txt (deflate format), and rfc1952.txt (gzip format).
Correct me if I'm wrong.
edit: OpenBSD-current has a BSD-licensed replacement of the GNU diff(1), diff3(1), grep(1), egrep(1), fgrep(1), zgrep(1), zegrep(1), zfgrep(1), gzip(1), zcat(1), gunzip(1), gzcat(1), zcmp(1), zmore(1), zdiff(1), zforce(1), gzexe(1), and znew(1) commands, which might be helpful to you guys. Support the project and buy 3.4 when it comes out (Nov. 1). :P

My bad: j0rd is right.

Pardon me for jumping in on this thread - I'm a newbie reading up on stuff before I get started next week.
What exactly do you mean by "cache" in this case? Is it like you only have n images in native format at a time and convert new ones into that area over an older one, sorta thing?
I have a pretty decent sprite-compression system left over from my PocketPC library, but I'm not sure what to do with it. If I put these .PRI files into the BAR as custom resources, I can then decode 'em myself when I need them, right? What then, though - do I need to decode into a bitmap and then call CONVERTBMP on that, or is there a way to go direct?
I guess what I'm really hoping for is some struct somewhere that defines the bit depth, strides, and whatever else is needed for the display in question, so I can write my own sprite routines to work from the source directly. Is that doable?
I really would like to use my compressed sprites, but even more than that I want some translucency effects! I know these are read-modify-write, but it's still nice to put a few pixels of it in the right places. :)
Thanks...

First of all, yes, you can use your proprietary format. BREW uses the standard BITMAPINFOHEADER structure that you also find in Windows to work with its non-native bitmaps, so anything you can do with non-native bitmaps there, you can also do with your images in BREW.
However, you may want to keep in mind that BREW phones are very *weak* compared to other platforms, such as the PPC or the MS Smartphone. Even in an optimized environment, you're effectively limited to framerates around 10fps on most phones, and some handsets manage much less than that. With that in mind, translucency and the like - while possible - is usually not something you can use for anything but static screens.

Quote:Originally posted by Applewood
What exactly do you mean by "cache" in this case? Is it like you only have n images in native format at a time and convert new ones into that area over an older one, sorta thing?
Pretty much, except unfortunately CONVERTBMP() doesn't let me choose where the native-format versions live: usually they're system handles. So when I purge space from the cache to make room for something I'm converting in response to a cache miss, I'm usually releasing a system resource.
This is unfortunate because, particularly for fragmentation reasons, I'd much rather be able to manage that space myself.
Also, my cache is based on "memory used" instead of "number of images", since there is so much variation in size from image to image.
Quote:
I have a pretty decent sprite-compression system left over from my PocketPC library, but I'm not sure what to do with it. If I put these .PRI files into the BAR as custom resources, I can then decode 'em myself when I need them, right? What then, though - do I need to decode into a bitmap and then call CONVERTBMP on that, or is there a way to go direct?
Unfortunately, in order to use IDISPLAY_BitBlt() you have to use CONVERTBMP(). There is no CONVERTPNG() or CONVERTJPG() or anything like that, and IIMAGE doesn't support transparency or blitting subrects of the source image.
So you can use a custom format, but at some point you have to convert it to BMP, pass that to CONVERTBMP(), then pass the result to IDISPLAY_BitBlt().
Quote:
I guess what I'm really hoping for is some struct somewhere that defines the bit depth, strides, and whatever else is needed for the display in question, so I can write my own sprite routines to work from the source directly. Is that doable?
Those properties of the device-native formats are undocumented. What's worse, the space consumed by CONVERTBMP() is usually 4-8 times that of the original image on many devices. I would LOVE to be able to implement my own tight RLE drawing routines, but short of reverse-engineering each device's format, you're stuck with CONVERTBMP and BMP sources.
In my app I stick compressed BMPs in my resource file (mine is custom, but you could do this with the BREW Resource Editor), then uncompress them, then CONVERTBMP them. The uncompressed-but-not-converted buffer is reused by lots of other stuff, so my cache purge only releases the CONVERTBMP handles.
Sorry for the bad news.
-Jesse
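For anyone following along, the decompress-convert-blit pipeline described above looks roughly like this (a sketch only: Decompress() is a stand-in for your own decoder, and the CONVERTBMP()/AEEImageInfo details should be checked against AEEStdLib.h in your SDK):
Code:
/* pStored holds compressed BMP bytes pulled from the resource file. */
AEEImageInfo ii;
boolean bRealloc = FALSE;
void *pBmp;      /* plain Windows-format BMP after decompression */
void *pNative;   /* device-native version, cheap to blit         */

pBmp = Decompress(pStored, nStoredSize);     /* your decoder, not BREW's   */
pNative = CONVERTBMP(pBmp, &ii, &bRealloc);  /* 4-8x bigger on many phones */

IDISPLAY_BitBlt(pDisplay, x, y, ii.cx, ii.cy, pNative, 0, 0, AEE_RO_COPY);
IDISPLAY_Update(pDisplay);

/* Keep pNative in the cache; when it's evicted, SYSFREE() it
   (only if bRealloc came back TRUE). */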

IIMAGE does support transparent blitting and using subimages. You can use IIMAGE_SetParm to set the transparency blt operator, and IIMAGE_SetDrawSize and IIMAGE_SetOffset to specify the subrect you are drawing from.
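In case it helps anyone searching later, that usage looks roughly like this (a sketch assuming the standard IIMAGE helper macros from AEEImage.h; the sprite-sheet coordinates are made up):
Code:
#include "AEEImage.h"

void DrawSpriteCell(IImage *pImage, int x, int y)
{
   /* Treat the image's transparent color as see-through. */
   IIMAGE_SetParm(pImage, IPARM_ROP, AEE_RO_TRANSPARENT, 0);

   /* Draw only the 16x16 cell at (32, 0) in the source image. */
   IIMAGE_SetOffset(pImage, 32, 0);
   IIMAGE_SetDrawSize(pImage, 16, 16);
   IIMAGE_Draw(pImage, x, y);
}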

OK, thanks guys - some good answers here that will keep me going for a bit.
I'm surprised that the choice of graphics method is between "massive" and "very slow", though. I still don't know what to do about picking a method, but I guess I'll figure it out. :S

Quote:Originally posted by flarb
IIMAGE does support transparent blitting and using subimages. You can use IIMAGE_SetParm to set the transparency blt operator, and IIMAGE_SetDrawSize and IIMAGE_SetOffset to specify the subrect you are drawing from.
Whoops! How did I miss that one?
That being the case, how does IIMAGE perform in comparison with CONVERTBMP/IDISPLAY_BitBlt?

As far as I know, IImage is relatively slower than CONVERTBMP/IDISPLAY_BitBlt.
ruben

They should be about the same on the later handsets. All IImage is doing is CONVERTBMP then blitting - or that's what I was told at the conference.

Based on the memory used by IImage on the handsets and the emu, yes, that's all it's doing.
The reason this catches people by surprise is that IImage and CONVERTBMP on the 1.1 emu convert to an 8-bit BMP, but on the 12- and 16-bit phones you get a 16-bit image. If you're using the 2.0 emu with a 16-bit skin, the memory used by IImage and CONVERTBMP will more closely match the memory used on the phones.
Tom

Indeed, images are converted to the device's native format as soon as they're loaded into IIMAGE objects via ISHELL_LoadResImage. The difference between this and CONVERTBMP() is that with the latter, the device-independent bitmap is also still in memory (until you free it).
That being said, why does IIMAGE sometimes fail to draw images when there isn't enough free heap memory, even though the image has already been loaded? (This seems to be image-size dependent.)

I don't get it - so is there a size difference between IImage and CONVERTBMP or not?

Quote:Originally posted by Vexxed
Based on the memory used by IImage on the handsets and the emu, yes, that's all it's doing.
The reason this catches people by surprise is that IImage and CONVERTBMP on the 1.1 emu convert to an 8-bit BMP, but on the 12- and 16-bit phones you get a 16-bit image. If you're using the 2.0 emu with a 16-bit skin, the memory used by IImage and CONVERTBMP will more closely match the memory used on the phones.
Tom
I've noticed that a lot of the emulator configurations are set to 8-bit, even for 16-bit or 12-bit devices. I've gotten in the habit of tweaking my device configurations up to 16 to give me a better sense of the expense of these things - and also, of course, "repairing" the screen sizes.
I've got another trick. My eyes are lazy: I prefer large font sizes in my code editors, and I hate squinting at the screen. I've found that reworking the emulator configurations so the screen window is twice the screen resolution in each dimension is a big help.
-jhw
