Code Size in Brew | developer.brewmp.com

Code Size in Brew

Hi !

I am a newbie to Brew development. I am currently developing a game for Brew 1.1 enabled mobile phones and have a bit of a problem. Most development of the game was done under Windows with a self-built underlying framework. It is written in C++ and there are around 25 classes in the code. When porting it over to BREW I had a few problems, but most were overcome easily. Now the biggest problem is code size. I have NOT tried the game on any phone YET - only on the emulator (it runs fine on the 2.0 emulator, while 1.1 is REALLY slow) - but that is not the problem... The problem is that the .dll file of the game is over 400k in size, while the Windows counterpart (exe or dll) is only 80k. HOW is this possible? I am afraid to compile it with gcc for ARM for a real mobile phone, because it will probably be even bigger.

Is there ANY way for me to reduce the DLL size? I cannot believe that the Windows .exe is only 80k while the Brew 1.1 DLL is over 400k :(

Best regards and thanks for any feedback in advance !

Tomaz

Don't worry about the dll size, the mod file will be much smaller.

Thanx ! :-)
I just found what was causing it - the Code Generation was set to "Debug Multithreaded" instead of "Multithreaded DLL" - now the size is 28k, which made me very happy. Even with both optimisations set to Maximize Speed the code size is 36k.
One more question - in the 2.0 emulator the timer interrupt takes around 2 ms to get acknowledged (well, it is less than 10 ms, since it seems to go in 10 ms steps), while in the 1.1 emulator it takes a full 40 ms to get acknowledged - is this normal? I.e. if I set the timer to execute the function in 0 ms, it actually takes 40 ms.
Also, what is the actual speed of the devices compared to the 2.0 emulator? In the 1.1 emulator, if I fill the whole screen of a Motorola T720 with 16x16 images, it takes 60 ms to refresh... if I set the blit mode to MASK instead of COPY it takes a whole 320 ms! While on the 2.0 emulator it takes less than 10 ms in both modes. Very confusing :)
Can't wait to try it on a real thing though.
Best regards,
Tomaz


Don't rely on the emulator to estimate the speed at which it will run on the phone; it does not even attempt to simulate it, AFAIK. Also, use the 2.0 emulator; the 1.0 emulator isn't as good and is very, very slow (actually slower than the target phone, last time I checked!).

On top of that, on the actual handsets, timers are inaccurate anyway, because they are not real-time interrupts but events driven through the system.

Thank you. I was going to use the 2.0 emulator, since 1.1 was REALLY slow - so slow I couldn't believe it.
I have one more question about speed:
Is it better (faster to display) to load all images as Windows BMP files into memory and then use IDISPLAY_BitBlt() to display them on the screen, or is it faster to load them as IImage objects and use IIMAGE_Draw()? I am doing a scrolling game and am basically redrawing the whole display every frame, so the speed of blitting a bitmap to the screen is crucial.
Also, if IDISPLAY_BitBlt() is faster than IIMAGE_Draw(), is there ANY way to load the BMPs from resources instead of files and somehow get the BMP pointer from them to use in the IDISPLAY_BitBlt() function? I have tried, and the emulator crashes every time...
Best regards,
Tomaz

> On top of that, on the actual handsets, timers are inaccurate anyway, because they are not real-time interrupts but events driven through the system.
Hmm, what is the best way to do timing in a game then? I want the game to run at approximately 60 frames per second, and this is what I do now (basically I have logic_loop() and graphics_loop() - I need to call logic_loop() exactly 60 times per second for correct timing, and graphics_loop() as many times as possible - on the 2.0 emulator it works fine):
static void TimerFunction(Globals *glob)
{
   int i, num;
   int old_time = glob->time_counter;
   glob->time_counter = GETTIMEMS();
   if (glob->time_counter - old_time <= TIME_CONST || glob->last_not_shown)
   {
      // Took us less than one frame to do the previous loop
      logic_loop(glob);
      graphics_loop(glob);
      i = GETTIMEMS();
      num = 0;
      if (i - glob->time_counter < TIME_CONST)
      {
         num = TIME_CONST - (i - glob->time_counter);
      }
      glob->last_not_shown = FALSE;
   }
   else
   {
      // Took us more than one frame, so do more logic loops
      num = (glob->time_counter - old_time) / TIME_CONST;
      for (i = 0; i < num; i++)
      {
         logic_loop(glob);
      }
      glob->last_not_shown = TRUE;
      num = 0; // Just in case
   }
   ISHELL_SetTimer(glob->a.m_pIShell, num, (PFNNOTIFY) TimerFunction, glob);
}


60 frames per second? There is no device available today that can do that. I think 10-15 is the best that people have got on current devices so far, with the average in the single digits.
Keep in mind that the Emulator doesn't actually "emulate" the CPU of the devices, so the performance will most likely be nothing like an actual handset. They made a step in the right direction in the 3.0 SDK by renaming it to "Simulator", which is more like what it is. So anything performance-wise should be tested on a handset as soon as you can.
-Tyndal

You obviously have never held a Brew phone in your hands, tom-cat. :)
60 fps is an illusion. As tyndal says, on some phones you have to be glad if you make 10 fps.

Ahh, I thought that this might be the case... I need to download gcc for ARM and everything ASAP and get an actual handset, so I don't do too much stuff that will need changing in the end anyway.
The timer function can be changed pretty easily later on... but the decision between IDISPLAY_BitBlt() and IIMAGE_Draw() would mean a lot of changes to the framework... Currently I use IIMAGE_Draw(), but need to set the offset and clipping size every time I want to draw... somehow I think IDISPLAY_BitBlt() would be faster... but on the emulator I just can't see the difference :(
Best regards,
Tomaz

Yep - never held a Brew phone in my hands before (unfortunately). Only played stuff on the N-Gage and some more "advanced" handsets... where 60 frames is possible ;-)
I thought that might be the case with the games - slow frame rates - but never knew it could be as low as 10 fps. Well, I will have to make the best of it. And get a phone to test stuff on ASAP...
Best regards,
Tomaz

You definitely want to use IDISPLAY_BitBlt. The best way to deal with it is to load the BMP out of the resource and then call CONVERTBMP. The resulting pointer can be used for the blit, and it is significantly faster than the whole IImage shebang.

Thank you - I will use IDISPLAY_BitBlt() from now on then - I thought it might be faster than the IImage stuff.
Although I tried doing exactly as you said - load the image from the resource file and then do CONVERTBMP() - it crashed EVERY time (that's why I went to the IIMAGE stuff). The resource bitmap is a normal Windows BMP in 256 colours... do you have a code snippet in which this works?
Tomaz

You are most likely forgetting to skip the MIME header after loading the image.
bmSource = (AEEBmp *) ISHELL_LoadResData(curApp->a.m_pIShell, RES_FILE_NAME, (short) resID, RESTYPE_IMAGE);
dataBytes = (char *) bmSource + *((char *) bmSource); // Skip the MIME header
imagePointer = CONVERTBMP(dataBytes, &imageInfo, &allocFlag);
This is the core of it, in essence.


Thank you! That's what I forgot...
By the way, where do I get more info on the usage of these functions? I looked and didn't find this info about the MIME header in either the API PDF reference or the FAQ/knowledge base on the site.
Another question - should I just use gcc to compile for the actual phone, or is the commercial compiler so much better?
Best regards, and thank you for all your help !!!
Tomaz

New BREW phones based on the MSM6100 are capable of providing a much higher frame rate.
In fact, our application runs on the N-Gage/Nokia 3650 in addition to a BREW phone based on the MSM6100 (a prototype), and we see that the BREW phone beats the Nokia phone to a noticeable extent (in terms of performance, network download, etc.). The Verizon prototype Audiovox CDM 9900 is an MSM6100-based phone.
ruben

tom-cat,
Your compiler choice certainly depends on your budget, but let me say this: the ADS ARM compiler is among the best money can buy, and in terms of speed and code quality, GCC can't hold a candle to it.

Dragon,
you say the ARM compiler is faster and has better code quality.
What does that mean? That the actual compilation process takes less time? Or that it generates code that actually executes faster than what GCC for Brew produces?
I'm asking because I'm using GCC, since I am too damn poor right now to spring for the ARM compiler.
Care to offer an opinion, or is there somewhere where this is already all hashed out?

It is both. The ADS compiler is faster in terms of actual compile times. Given the small size of Brew projects that's usually not an issue, though.
More importantly, the ADS compiler generates code that performs significantly faster than the code generated by GCC; it also generates smaller and significantly better optimized code. In fact, the ADS ARM compiler is so good that I have had many cases where I was not able to optimize the code any further by hand, and I have some 20+ years of experience in assembly programming. ADS is the best ARM compiler around, and given its strong syntax checking and almost lint-like code analysis, it is without question the best compiler I have ever come across - Intel's x86 compiler taking second place.