Fast rendering in AIR 3.0 (iOS & Android)

While we await the release of Stage3D, a lot of developers seem to be under the impression that you cannot build performant games on mobile devices with AIR 3.0.

The truth is, with a little help, the power of the GPU is already unlocked for us!

In this write-up, I’ll show you how to increase rendering performance in AIR by 5x over the normal display list. The end result is a rendering engine that is faster than iOS 5’s hardware-accelerated canvas! (more on this in a follow-up post…)

Say What?

The Adobe Engineers have done a fantastic job of re-architecting the GPU Render Mode in AIR 3.0. With a little bit of work on the developer’s end we can coax amazing performance out of these little devices. So, how does it work?

  • Set <renderMode>gpu</renderMode> inside your -app.xml file.
  • Use Bitmaps to render your content.
  • Cache bitmapData across your classes/application.
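For reference, the render mode lives in the initialWindow block of the application descriptor; a minimal sketch looks something like this (the namespace version depends on your SDK):

```xml
<application xmlns="http://ns.adobe.com/air/application/3.0">
    <initialWindow>
        <!-- Enables the GPU render mode discussed in this post -->
        <renderMode>gpu</renderMode>
    </initialWindow>
</application>
```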

That’s almost too simple, right? But it works, I promise :p The Bitmap class is just insanely optimized right now. Kudos to the entire AIR 3.0 team for such an amazing step forward.

So, to boil it down: Use the bitmapData.draw() API to manually cache your displayObjects, save that bitmapData object in a shared cache or static property, and then render a bitmap in place of your displayObject. Essentially, you are writing your own cacheAsBitmap function, but one using a shared cache.


Let’s get to the good stuff first, shall we? Charts!

We’ve run a stress test across a variety of devices, and the results are below. Each test consists of one shared texture, with rotation, alpha and scale; the test continues to add sprites while trying to maintain 30fps. We compare standard CPU Mode rendering with the same test in GPU Mode.

You can view an HTML version of the test here, so you know what I’m talking about:


You can clearly see the massive difference in performance. It’s also important to note that the CPU tests are fairly well optimized, as we’re still using bitmaps with shared bitmapData properties, which is a good optimization technique. This is not some contrived example to make the CPU renderer seem poor.

[Update] Someone asked about copyPixels. copyPixels will be somewhere in between the two tests: it’s faster than the displayList, but slower (and considerably less flexible) than using the shared bitmapData technique. As new devices come out with higher resolution displays, copyPixels will fall further and further behind (see comments for more details).

Example Code

Ok, enough talk, let’s see some code!

For example 1, let’s say I have a SnowballAsset class in an FLA, exported for ActionScript. It’s a nice vector snowball in a MovieClip, and I want to render it really fast in GPU Mode.

public class SnowBall extends Sprite {
	//Declare a static data property; all instances of this class share it.
	protected static var data:BitmapData;
	public var clip:Bitmap;

	public function SnowBall(){
		//Only rasterize the vector asset once, for the first instance.
		if (!data) {
			var sprite:Sprite = new SnowBallAsset();
			data = new BitmapData(sprite.width, sprite.height, true, 0x0);
			data.draw(sprite, null, null, null, null, true);
		}
		clip = new Bitmap(data, "auto", true);
		addChild(clip);
		//Optimize mouse children
		mouseChildren = false;
	}
}

Now I can simply spawn as many SnowBalls as I need, and they will render with what is essentially full GPU acceleration. Note that in this simple example, your assets must have an internal position of 0,0 for this to work properly (but you can live with that for a 5x increase in speed… right? Or just add a few lines of code to figure it out…)

In this next example, we’ll make a similar class, but this one is re-usable: you just pass in the class name of the asset you want to use. Also, sometimes you want to maintain the ability to scale your vector and have it still look good. This can be achieved easily by oversampling the texture before it’s uploaded to the GPU.

public class CachedSprite extends Sprite {
	//Declare a static data cache, keyed on the asset's class name.
	protected static var cachedData:Object = {};
	public var clip:Bitmap;

	public function CachedSprite(asset:Object, scale:int = 2){
		//Check the cache to see if we've already cached this asset
		var key:String = getQualifiedClassName(asset);
		var data:BitmapData = cachedData[key];
		if (!data) {
			var instance:Sprite = new asset();
			var bounds:Rectangle = instance.getBounds(instance);
			//Use a matrix to translate assets that are not aligned at 0,0,
			//and to up-scale the vector asset; this way you can increase
			//scale later and it still looks good.
			var m:Matrix = new Matrix();
			m.translate(-bounds.x, -bounds.y);
			m.scale(scale, scale);
			data = new BitmapData(bounds.width * scale, bounds.height * scale, true, 0x0);
			data.draw(instance, m, null, null, null, true);
			cachedData[key] = data;
		}
		clip = new Bitmap(data, "auto", true);
		//Use the bitmap to inversely scale, so the asset still
		//appears to be its normal size
		clip.scaleX = clip.scaleY = 1 / scale;
		addChild(clip);
		//Optimize mouse children
		mouseChildren = false;
	}
}
[Update] Based on a couple of comments, I’ve updated this to include the code for assets which are not aligned at 0,0 internally. This should avoid any clipping.

Now I just create as many instances of this as I want. The first time I instantiate an asset type, there will be a draw hit, and an upload hit as it’s sent to the GPU. After that these babies are almost free! I can also scale this up to 2x without seeing any degradation of quality. Did I mention that scale, rotation and alpha are almost free as well!?

With this class all assets of the same type will use a shared texture, it remains cached on the GPU, and all runs smooth as silk! It really is that easy.

It’s pretty trivial to take this technique and apply it to a SpriteSheet, or a MovieClip with multiple frames. Any ActionScripter worth his salt should have no problem there, but I’ll post up some helper classes in a follow-up post.
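As a rough sketch of that idea (names are hypothetical; these are not the helper classes from the follow-up post): rip each frame of a sheet into its own cached bitmapData once, then animate by swapping the bitmap’s bitmapData property each frame, so the per-frame textures stay stashed on the GPU and no ongoing copyPixels work is needed.

```actionscript
//Hypothetical helper: chop a spritesheet into per-frame textures, once per asset.
protected static var frameCache:Object = {};

public static function ripFrames(key:String, sheet:BitmapData,
								 frameWidth:int, frameHeight:int):Vector.<BitmapData> {
	var frames:Vector.<BitmapData> = frameCache[key];
	if (!frames) {
		frames = new Vector.<BitmapData>();
		var cols:int = sheet.width / frameWidth;
		var rows:int = sheet.height / frameHeight;
		//One copyPixels per unique frame, ever; after this the cache is reused.
		for (var i:int = 0; i < rows * cols; i++) {
			var data:BitmapData = new BitmapData(frameWidth, frameHeight, true, 0x0);
			data.copyPixels(sheet,
				new Rectangle((i % cols) * frameWidth, int(i / cols) * frameHeight,
							  frameWidth, frameHeight),
				new Point());
			frames.push(data);
		}
		frameCache[key] = frames;
	}
	return frames;
}

//Then, on each tick, animate by swapping the shared bitmapData:
//clip.bitmapData = frames[currentFrame % frames.length];
```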

*Note: It’s tempting to subclass Bitmap directly, and remove one extra layer from the display list, but I’ve found it’s beneficial to wrap it in a Sprite. Sprite has a mouseEnabled property for one thing, which a Bitmap does not (not sure why…), and using a Sprite is what allows you to easily oversample the texture, without the parent knowing or caring about it.

Details, details…

So what’s really happening under the hood?
  • With renderMode=GPU: when a bitmapData object is rendered, that bitmapData is uploaded to the GPU as a texture
  • As long as you keep the bitmapData object in memory, the texture remains stashed on the GPU (this right here is the magic sauce)
  • With the texture stashed on the GPU, you get a rendering boost of 3x – 5x! (depending on the GPU)
  • Scale, Alpha, Rotation etc are all extremely cheap

The beauty of this method is that you keep the power of the displayList: you can nest items, scale, rotate, and fade, all with extremely fast performance. This allows you to easily scale your apps and games to fit various screen dimensions and sizes, using standard AS3 layout logic.

The gotchas

Now, there are some caveats to be aware of; GPU Mode does have a few quirks:
  • You should do your best to keep your display list simple; reduce nesting wherever possible.
  • Avoid blendModes completely, they won’t work. (If you need a blendMode, just blend your displayObject first, use draw() to cache it, and render the cache in a bitmap.)
  • Same goes for filters. If you need to use a filter, just applyFilter() on the bitmapData itself, or apply it to a displayObject first and draw() it.
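To make the filter workaround concrete, here is a minimal sketch of baking a glow into the cached texture once, instead of leaving a live filter on the display object (the `sprite` variable and the padding value are illustrative assumptions):

```actionscript
//Bake a filter into the cached bitmapData once; a live filter on the
//display object would hurt (or break) GPU mode rendering.
var glow:GlowFilter = new GlowFilter(0xFFFFFF, 1, 8, 8);
//Pad the bitmapData so the glow isn't clipped at the edges.
var pad:int = 8;
var data:BitmapData = new BitmapData(sprite.width + pad * 2,
									 sprite.height + pad * 2, true, 0x0);
var m:Matrix = new Matrix();
m.translate(pad, pad);
data.draw(sprite, m, null, null, null, true);
//applyFilter writes the filtered result back into the cached texture.
data.applyFilter(data, data.rect, new Point(), glow);
var clip:Bitmap = new Bitmap(data, "auto", true);
```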
In our next post, we’ll do some more comparisons, this time with animated spritesheets. We’ll also post a small class we use to rip spritesheets on the fly.
Update: The post on spritesheets is up:


We’ve released our finished game which is built on this rendering technique, so try it out and see what you think! The lite version is totally free.

The game includes many spritesheets (4 enemy types, 3 ammo types, multiple explosions, etc). It uses tons of scaling and alpha overlay, and runs really well on everything from a Nexus One to an iPad 2.

iPhone / iPad:


Amazon Fire:


39 Comments to “Fast rendering in AIR 3.0 (iOS & Android)”

  1. Steve Price says:

    What about copyPixels or getVector operations? Do you know if there is a CPU readback hit incurred if you try to read pixels off a BitmapData instance and put them to another BitmapData instance?

    • shawn says:

      CopyPixels is somewhere in the middle. It’s faster than the normal display list, but is nowhere near as fast as GPU Mode since you’re still totally reliant on the CPU. There _is_ a readback hit when trying to take a portion of one bitmapData and copy it to another.

      Also, copyPixels has major drawbacks in terms of flexibility, you can’t scale, you can’t alpha, you must pre-cache rotations etc… when everything is running off the GPU you don’t need to worry about any of that, just treat it like a normal display object.

      The other major drawback to copyPixels is that it is not scalable: performance is directly related to the size of the screen. It will run wicked fast on a low resolution device, but with newer devices, like the 1280×720 Galaxy Nexus or the upcoming iPad 3 with Retina Display, it will run like a dog, as it’s simply too many pixels to be pushing, and your GPU bandwidth becomes completely saturated. With GPU acceleration, none of that matters; your stage size is largely irrelevant.

      • Steve Price says:

        Thanks for the feedback! I’ve been diving into iPad development at my job for the last month. I’m currently working on a simple menu UI component for AIR that matches the look and feel of an iOS pop-up menu with the gliding and bouncing behaviors of the real thing. I’ve gotten the glide/bounce behavior down, but I’ve been struggling (and failing) to get 30fps out of it. I’ve tried masking and scrollRect in CPU mode, but that has given me… quirks. Using getVector to sample from a bitmap of a sprite menu containing 60+ item buttons might fall in the “Goldilocks Zone” for that technique (if I get lucky).

        I really dig the UI on your apps, especially TouchUp Pro. Are you using that GPU move on that, too? Keep up the awesome work.

        • shawn says:

          I didn’t use GPU mode there because at the time AIR 2.7 didn’t have such a great implementation as AIR 3.0 has, and because it is so heavily based on filters, it might not be the best choice.

          Good luck with the iPad stuff! iPad 1 is a dog, just a really really poor performing device. It’s probably the slowest of the Apple Family, both a weak cpu and gpu.

          The good news is that iPad 2 is a beast, and iPad 3 is right around the corner :)

          • Steve Price says:

            The thing that really killed me about that menu was that with scrollRect and cacheAsBitmap it worked great testing on my Windows dev machine. My main thing to fix was to make sure AIR wasn’t rendering the regions of the screen under the non-visible parts of the menu, and cAB did that just fine on AIR for Windows. When I got it on my test machine, not only was it rendering the masked regions of the menu, I could interact with the menu by scrolling and tapping on the masked-out, invisible parts of it!

            So that’s one less magic bullet that works in AIR for iOS, unfortunately. Still, I’m glad to know it’s not just me and that better hardware is on the horizon.

  2. shawn says:

    Ya, scrollRect is a no-go on mobile; basically forget that API even exists. Believe it or not… old school masking is the way to go. Round and round we go!

  3. Kevin N. says:

    How big are your Bitmap Assets? I am able to get up to 1000 bunnies in BunnyMark in GPU mode on iPhone 4S.

  4. Burvs says:

    I’m interested to see how you’re going to do spritesheet animation in your next post. In my own tests so far as soon as I add blitted animation to my sprites I’m getting a much better result keeping it in cpu mode and doing full-stage blitting than using gpu mode.

    • shawn says:

      copyPixels is faster on mobile? That’s bizarre… from my initial testing copyPixels tops out at around 100 animated sprites @ 30fps on an iPad 2, where GPU mode can push around 970 smoothly (no alpha, rotation, or scale, to make it a fair test).

      I’ll be putting up a small tutorial on my technique soon; basically it involves embedding a PNG spritesheet, chopping up each frame’s bitmapData, and then swapping those bitmapDatas every frame.

      What kind of numbers are you seeing with copyPixels? What devices? This technique is highly GPU dependent, and GPUs can vary wildly between devices, much more than CPUs do.

      p.s. Nice site, but damn, if I’ve ever seen a site screaming for Stage3D support it’s yours! Use ND2D and you could have the entire background rendered basically for free on the GPU, bringing your CPU usage down by a factor of 10x or more…

      • Burvs says:

        Okay… after some further experiments here’s a bit more detail about what I’m getting…

        I’m testing on an HTC Desire (which is basically the same as a Nexus One), so it’s nearly a two-year-old phone, although it’s capable of running some pretty decent games.

        I’ve got a Sprite about 80px square which I’m animating using copyPixels from a spritesheet (8 frames of animation).

        To start with I had each instance of the sprite containing its own bitmapData and doing its own copy pixels on every frame update. In GPU mode I can only get about 15 sprites running at a steady 30fps.

        If I switch to CPU mode and do full stage blitting (i.e., no sprites at all, just one big bitmap canvas and lots of copyPixels on each update), I can get about 50 ‘sprites’ running at 30fps.

        However… inspired by your example above, I’ve found I can get about 50 sprites running at 30fps by using GPU mode, (so the same as my first example above) but having just a single shared bitmapData across all the sprites, so I’m only doing one copyPixels call every update, and sharing that bitmapData across all the sprites. This does mean that the sprites always have to be displaying the same animation frame as each other, but no big problem. I’m quite happy with using this technique for the particular game I’m working on so thanks for giving me the push to try GPU again.

        I had wondered about trying something along the lines you suggested in your reply, where you’d store a sequence of bitmapDatas and just swap them every frame rather than doing copyPixels, so perhaps I’ll give that a go, although I can’t see it helping massively, as even if I turn the blitting animation off altogether, I don’t get a massive frame rate improvement just scrolling sprites across the screen.

        RE my own site – it’s a bit neglected; to be honest I’d rather do a new one that was just a plain old html portfolio, but perhaps I’ll muck around with the Stage3D stuff when I get time. :o )

        • shawn says:

          Good stuff! I’d definitely look into pre-caching the bitmapData, as it really is a waste to be using copyPixels (more than once per unique frame).

          I think you’ll see a big difference in speed with that technique if the frames were not in sync… as you have it now, you probably get a texture upload hit for each frame as you copyPixels a completely new bitmapData. But that hit is hidden, as it only applies to one sprite, and the other 49 can use that texture for free.

          But, if you had alternating animations, then you’d get multiple texture uploads for each unique frame.

          If you pre-cache them, then there’s only one initial hit, and then all those textures are just sitting on the gpu, stashed, waiting to be rendered.

          Anyways, glad this worked out!

          Oh, and a quick way to bump your framerate that I don’t mention in the article, stage.quality = LOW.

          • Haplo says:

            I can get 1000 copyPixel operations with bitmapdata containing alpha pixels in about 7-8 ms on my HTC Desire. This is done straight in the browser, though. But I assume the browser flash should be slower than an air app.

            I made a small benchmark for it here: . You can check the source code on the left, it does 100000 copyPixel operations on different locations on a 250×250 bitmapdata. It is pretty simple and doesn’t contain a button to run the test so the test runs immediately (together with the rest of the page loading) which makes it a lot slower. On my HTC Desire it took at first about 2000ms (20ms/1000 operations), but after a few reloads it went to 700/800ms.

            On my computer (Linux) it takes a bit more than 100ms (1ms/1000 operations) and on average it seems that flash on Linux (for copyPixels at least) is about 5-10 times faster than flash on my HTC Desire. Still it seems enough for the games I would like to make with a lot of bitmapdata copyPixel operations.

          • shawn says:

            The flaw in your tests is that the canvas size is far too small… With CopyPixels, the performance is directly related to the size of your canvas. If you test with a little 465×465 swf, well of course it will run fast.

            However, take that _exact_ same test, but increase your canvas to 1280×720 (Nexus) or 960×640(i4s), now instead of pushing 216k pixels/frame (465x 465), you’re uploading 921k pixels/frame, a 4x increase. Your CPU bandwidth becomes completely saturated.

            With a resolution of 1280×720, you’d find that 1 single copyPixels call would be enough to bring your framerate < 60fps, simply because of the cost of transferring all those pixels each frame.

          • Haplo says:

            Yes, it indeed becomes slower when you need to transfer a lot more pixels. Copying a 1280×720 bitmapdata takes about 9-10ms in the browser flash while only 0.3ms on my desktop: . I think it is probably faster in Air.

            However for the copying itself, the destination canvas/bitmapdata size is pretty much irrelevant for the time it takes to copy the bitmapdata’s. Only the size of the bitmapdata’s being copied (and the number of them) actually counts. If you copy 1000 times a 25×25 bitmapdata on a 100×100 canvas or on a 2000×2000 canvas, the time will be the same (only drawing the canvas on screen will be slower obviously).

            So what actually matters is the number and size of all the sprites/images that are drawn and I guess you assume that a larger canvas will have more stuff drawn on it – most likely, but that depends on the game itself. However, according to your benchmark, it seems like you would only be able to draw at most 600 sprites with the gpu – unless they are big they won’t fill the whole screen either?

          • shawn says:

            Right, but “drawing the canvas on screen will be slower obviously” becomes your limiting factor on mobile; it’s not like desktop, where this is a minor portion of the load. Not to mention, now your CPU is saturated, so you have to share it with physics, AI, etc…

            “However, according to your benchmark, it seems like you would only be able to draw at most 600 sprites with the gp”

            Sort of… it’s highly dependent on the size of the sprites. Little sprites render much faster than big ones. So, it’s accurate to say you can render 620 sprites which measure 80×67 px, with rotation, alpha and scale.

            Since CopyPixels does not support rotation, scale or alpha, the comparisons are already skewed.

            Trust me, copyPixels is not the way forward. iPad 3 will be what, 2048×1520!?? You just can’t push that many pixels each frame; it’s totally unscalable.

          • Haplo says:

            According to the test, I can do 1000 copyPixels of 80×67 in about 30/40ms: . That seems to be getting closer already. Anyway, did you maybe do the same test with 25×25 pixels and what are those results? Those are the more typical/average sizes (at least for me) and hence more important to me.

            I agree that copyPixel is hard to scale to very big/ridiculous (so ridiculous I might buy one) resolutions on mobiles with slow processors. In that case the display list with gpu would work, but probably only with very few animations/sprites/movement and have big problems otherwise. You would really need to have real access to the gpu – something more similar to stage3D.

            By the way, what do you mean that copyPixels doesn’t support alpha? The bitmapdata that is being copied can have transparency (alpha). And if you mean an alpha mask that is possible too, but a lot more expensive.

            And also please don’t get me wrong, gpu rendering of the display list is a big improvement. It just doesn’t seem to outperform a fast/simple blitting alternative.

          • shawn says:

            If I’m reading that right, that test is giving me 2922 ms on my Galaxy Nexus…?

            Your numbers just seem really weird to me; in everything I’ve ever tested in AIR in fullscreen, copyPixels has not even come remotely close. Especially on iOS; on Android it can sometimes be close, because Android devices typically have a good CPU and weak GPU, whereas iOS devices have a weak CPU and very strong GPU.

            But I would LOVE an apples to apples comparison, did you check the post here:

            There I’ve included a copyPixels test, and the entire FlashBuilder project ready to go, I’d love it if you take that and see if you can tweak it to do better.

            * By not supporting alpha, I mean you can’t easily adjust the alpha of a display object. True you could use an alpha mask (ouch).

          • Haplo says:

            That would be for 100000 copyPixel operations, so about 30ms for 1000 operations. Although you have to be careful with the number as it seems the whole page (like the swf showing the code) is interfering.

            I’ll try to see what I can do with your code and I’ll try to put it in an actual Android app. Unfortunately this will take a while.

  5. Burvs says:

    Duh! I hadn’t even thought of trying stage quality. Makes a massive difference! Thanks!

  6. John S says:

    Unfortunately I’m sort of getting a kick in the pants upgrading from 2.7.1 to 3.1 in GPU mode for iOS.

    I was getting a really nice framerate before.. and yes, setting it down to StageQuality.LOW gives me similar performance but kills a lot of the nicely rendered stuff I was getting too.. maybe I’ll just downgrade for this project.

    • shawn says:

      Well you can’t just flick a switch and expect it to run fast. GPU Mode is all about managing your textures, the same way you’d have to do it with Stage3D to get good performance.

      Basically, anything that’s on your timeline, or rendered with vectors, needs to be converted to bitmapData. Any identical bitmapDatas need to be shared. Do those two things and your app will sing :)

  7. Paul says:

    Thanks for the great article :)

    I ran some tests myself on

    Samsung Galaxy ( i9000 )
    stage.quality = StageQuality.LOW

    and found the following:

    - I can achieve around 300 objects @ 30fps

    - On my phone using shared bitmap assets does not make any difference to directly adding a sprite ( containing one single transparent bitmap ) from the library

    - object size does not matter, it’s the same if the sprite is sized 16×16 or 128×128 pixels

    Well, performance is not bad, but I’m waiting for Stage3D as I need more :)

    • shawn says:

      Right, thanks for pointing that out. The Flash Pro team has optimized library assets, so if you use a shared bitmap asset from the library you’ll get a similar performance boost. The runtime is smart enough to do what we’re doing behind the scenes.

      But that’s only limited to bitmaps in your library, which is a lot less powerful than being able to construct your own bitmapDatas dynamically from any display object, movieclip or spritesheet.

  8. 01101101 says:

    Hi Shawn
    Thanks for this nice article!

    I’m implementing the CachedSprite method right now, and I just wanted to point out a small problem with the way you handle the scale. The size of the BitmapData should take it into account so that the asset doesn’t get cropped.
    Also I’ve added a translate call on the matrix in case the asset’s anchor point is not exactly in the top left corner of the source clip.

    if (!data){
        var instance:Sprite = new asset();
        var bounds:Rectangle = instance.getBounds(this);
        //Optionally, use a matrix to up-scale the vector asset,
        //this way you can increase scale later and it still looks good.
        var m:Matrix = new Matrix();
        m.translate(-bounds.x, -bounds.y);
        m.scale(scale, scale);
        data = new BitmapData(bounds.width * scale, bounds.height * scale, true, 0x0);
        data.draw(instance, m, null, null, null, true);
        cachedData[getQualifiedClassName(asset)] = data;
    }

    Thanks again for those tips anyway! :)

    • Andy Moore says:

      01101101 has a good point there, I implemented those changes too after seeing some cropping. And good thinking ahead with the Bound calls.

      I changed a few lines as well, that allows this class to be used with any type of data – bitmap, sprite, or “Class” (if you’re embedding assets) as well:

      var instance:Sprite = new Sprite();
      instance.addChild(new asset());

      Simple change :)

  9. Qronicle says:

    So I made a simple pong-like game with a bitmap background, bitmap ball(s) and bitmap player thingies. When using GPU mode I only got 40-45 FPS, CPU mode does it at a constant 60fps.
    [I was really suprised to see me only getting 44 FPS in GPU mode, so I switched to CPU to check if I didn't make any mistakes in my app.xml file. Was I surprised I suddenly got my target FPS :s]

    I’m not really sure what I should do to get more performance out of GPU. But for now I’ll continue with CPU rendering.

    • shawn says:

      Hmm something is wrong. Well unless you’re testing on a Nexus One or something… it has a strong CPU and weak GPU so in very simple scenes like the one you describe, CPU can still outperform. It’s very rare though, don’t put much stock in any one device.

  10. Rob says:

    Great tips, this approach turned an unusable demo into a fullspeed demo!

    However when used on larger Sprites I encountered a lot of down sampling (I was heading beyond 2880 pixels on iOS).

    I understand that many GPU units have a maximum texture size of 1024×1024 and therefore the textures will be shrunk to fit.

    Therefore the workaround is to nest tiled Bitmaps if the original is >1024. For anything >2880, definitely break it into pieces, or rethink how the original asset is composed to best match the above strategies.

  11. Shawn, great technique! Thanks!
    I have a question: I could keep this performance, refactoring CachedSprite class to return a Sprite, instead of being one?

    • shawn says:

      Sure, I don’t see why not; the point is just that it contains a bitmap which shares its bitmapData object.

  12. ClaverFlav says:

    So is the true benefit based on all bitmaps of that particular class using the same static bitmapdata?

    If so, do you think it would be bad to have a separate class (some sort of AssetManager) hold these statics instead? (So I don’t have to change the content in a bunch of locations.)

    Also thanks for figuring this out.

  13. aaron says:

    I think the process of fillRect()-ing the canvas bitmap is the major cause of slowdown when using copyPixels at large resolutions. As someone here pointed out, the whole screen need not be copyPixelled; it depends on the number of objects. I think instead of performing fillRect() to clear the entire canvas, it would be better to assign a bitmapData to the canvas bitmap.

    eg. canvasBitmap.bitmapData=your predrawn bitmapData , instead of canvasBitmap.fillRect(). Has anyone tested it out?

  14. aaron says:

    To put it short,

    1. Draw your canvas bitmapData and cache it in GPU.
    2. At each update : canvas.bitmapData=cachedBitmapData;
    3. copyPixels(your objects on to the canvas).

    Would it slow down on high res devices?

    • shawn says:

      Haven’t tried it, but I doubt it will be very performant on something like Retina iPad since it’s still an awful lot of data to transfer every frame.

      Also, what game have you ever made that has no background at all? Even if it does work, the use case would be extremely limiting: no rotation, no alpha, no scale, no fullscreen background. Meh…

      Just do bitmapData swapping; it works the way GPUs are meant to be used. If not that, jump right to Starling. Blitting is like the dodo bird, ancient history :)

  15. aaron says:

    Thats true. It will slow down on Retina.

    The game I am doing has got lots of bullets flying around. Swapping bitmaps for each bullet would involve having a bitmap on stage for each bullet which I want to avoid.

    I was thinking of blitting the bullets on to a separate bullet layer bitmap using copyPixels. Maybe store the dirty rects in an array to clean up afterwards, instead of fillrecting the entire bullet layer bitmap.

    You’re right. Starling is probably the way to go for the future though.

  16. Arwin says:

    This is a wonderful post! I tried to use the class CachedSprite But I got an error:
    TypeError: Error #1007: Instantiation attempted on a non-constructor.
    at CachedSprite()
    at Caching_fla::MainTimeline/frame1()

    when I do this on a main timeline;

    var m:Sprite = new mcs();
    var ob:Object = new Object;
    var cSprite = new CachedSprite(ob,2);

    I have no Idea why I got the error on instantiation?

  17. [...] Shawn’s original article laid out some source code and a longer explanation if you want to get into details and performance charts. His code does all of the conversions automatically for you, and the discussion in the comments of his article improved upon it. I added a few tweaks myself, and it is now the only class I use for any type of image data. Imported .PNG file? Sprite or MovieClip in a .SWC? Class reference to an object you custom made? Doesn’t matter! All automated, all quick, all easy to use. Best of all: the code is really short, simple, and easy to read in about a minute. [...]
