I have recently been working on learning some threejs and trying to understand 3D concepts in general. One of my favorite things is looking at 3D renderings of planets and other cosmic objects, so I found this post that talks about how he created his planets. He uses the textures from planetpixelemporium for them. Beyond the simple color textures there are textures that define the bump map for rocky planets, specular textures that define the reflectiveness of each area, textures for clouds, and even textures for rings.
While I was going through his blog post and his code, trying to make sure I understood everything he was saying, I came across this:
We build canvasCloud and use it as texture. It is based on the jpg images you see above: one for the color and the other for the transparency. We do that because jpg doesn’t handle an alpha channel. So you need to make the code to build the texture based on those images.
I was super confused, and based on his comments I apparently wasn't the only one. So, armed with just his code and Google, I set out to see what the deal is with multiple JPEGs and transparency. I found all kinds of interesting things out there about why people write code like this instead of using a PNG or another format that supports transparency, as was done for this Android app, for a game in JS, and in this interesting trick that XORs two images to get a transparent one.
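For context, here is roughly what "use it as texture" could look like in three.js once you have a canvas containing the merged RGBA image. This is just my own sketch of the idea, not his actual code; mergedCanvas is a placeholder name for whatever canvas holds the result:

// Sketch only: turn a canvas holding the merged RGBA clouds into a three.js texture.
// "mergedCanvas" is a placeholder for the canvas with the combined image.
var cloudTexture = new THREE.Texture(mergedCanvas);
cloudTexture.needsUpdate = true; // tell three.js to upload the canvas contents

var cloudMaterial = new THREE.MeshPhongMaterial({
    map: cloudTexture,
    transparent: true // respect the alpha channel computed from the greyscale image
});

// A sphere slightly larger than the planet so the clouds sit above the surface.
var cloudMesh = new THREE.Mesh(new THREE.SphereGeometry(1.01, 32, 32), cloudMaterial);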
Armed with those reasons, I wanted to visualize what it actually looks like when we go through the data and merge the two JPEGs into an image with alpha. I originally wanted to do this in jsfiddle, where I normally put my examples, but unfortunately jsfiddle doesn't support hosted images, I didn't want to pay one of the other online sandboxes to host them, and you can't read the image data of a canvas once an image from a different domain has been drawn onto it. So I put this on a website I have been playing around with for a bit, so you too can see it in action. The source is on github here.
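That cross-domain limitation is the canvas "tainting" rule: draw an image from another origin onto a canvas and reading its pixels throws a security error. A quick illustration (the URL here is made up):

// Illustration of the cross-domain restriction mentioned above.
var img = new Image();
img.addEventListener("load", function(){
    var canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);
    // Drawing a cross-origin image "taints" the canvas, so this throws a
    // SecurityError unless the server sent CORS headers and we had set
    // img.crossOrigin = "anonymous" before assigning src.
    var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
});
img.src = "http://other-domain.example/clouds.jpg"; // made-up URL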
To get this to work we will use three canvases (they don't have to be in the DOM to work): one for the source image, one for the alpha (transparency) image, which is all greyscale, and one for the destination. We then load the two images we want by creating an Image element for each, setting its src to the appropriate image, and handling each image's onload event. The code looks like this:
function loadImages(sourceImageUrl, alphaImageUrl){
    sourceImg = new Image();
    sourceImg.addEventListener("load", function(){
        // Size the source canvas to the image and draw it
        sourceCanvas.height = sourceImg.height;
        sourceCanvas.width = sourceImg.width;
        sourceContext.drawImage(sourceImg, 0, 0);

        // Only start loading the alpha image once the source image is ready
        alphaImg = new Image();
        alphaImg.addEventListener('load', function(){
            alphaCanvas.height = alphaImg.height;
            alphaCanvas.width = alphaImg.width;
            alphaContext.drawImage(alphaImg, 0, 0);

            // Both images are drawn; begin the merge
            startOverlay();
        });
        alphaImg.src = alphaImageUrl;
    });
    sourceImg.src = sourceImageUrl;
}
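The snippet above assumes the canvases, contexts, and image variables already exist in an outer scope. A minimal setup could look something like this (my sketch; the variable names simply match the code above, and the destination canvas id is hypothetical):

// Hypothetical setup for the variables loadImages() relies on.
// The source and alpha canvases stay off-screen; the destination is grabbed
// from the page here only so the result is visible.
var sourceCanvas = document.createElement("canvas"),
    alphaCanvas = document.createElement("canvas"),
    destinationCanvas = document.getElementById("destinationCanvas");

var sourceContext = sourceCanvas.getContext("2d"),
    alphaContext = alphaCanvas.getContext("2d"),
    destinationContext = destinationCanvas.getContext("2d");

// Declared in the outer scope so startOverlay() can read the image dimensions.
var sourceImg, alphaImg;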
Once that is done and we have the images loaded on the two canvases, we need to loop through the image data, copy the RGB components from the source canvas, and compute the alpha from the brightness of the greyscale image, with white (255) being completely transparent and black (0) being completely opaque (realistically you can define this mapping however you want). Here is the code for that:
function startOverlay(){
    destinationContext.clearRect(0, 0, destinationCanvas.width, destinationCanvas.height);

    var sourceData = sourceContext.getImageData(0, 0, sourceCanvas.width, sourceCanvas.height),
        alphaData = alphaContext.getImageData(0, 0, alphaCanvas.width, alphaCanvas.height),
        destinationData = destinationContext.getImageData(0, 0, destinationCanvas.width, destinationCanvas.height);

    var mergeProperties = $scope.mergeProperties;
    mergeProperties.x = 0;
    mergeProperties.y = 0;
    mergeProperties.offset = 0;

    function mergeImages(){
        // Process "speed" pixels per tick so the demo animates instead of finishing instantly
        for(var i = 0; i < mergeProperties.speed; i++){
            // Copy the RGB components straight from the source image
            destinationData.data[mergeProperties.offset + 0] = sourceData.data[mergeProperties.offset + 0];
            destinationData.data[mergeProperties.offset + 1] = sourceData.data[mergeProperties.offset + 1];
            destinationData.data[mergeProperties.offset + 2] = sourceData.data[mergeProperties.offset + 2];
            // Alpha comes from the greyscale image: white (255) -> transparent, black (0) -> opaque
            destinationData.data[mergeProperties.offset + 3] = 255 - alphaData.data[mergeProperties.offset + 0];

            mergeProperties.x++;
            mergeProperties.offset += 4;
            if (mergeProperties.x >= sourceImg.width){
                mergeProperties.x = 0;
                mergeProperties.y++;
            }
            if (mergeProperties.y >= sourceImg.height){
                // Finished the last row; paint the result and stop
                destinationContext.putImageData(destinationData, 0, 0);
                return;
            }
        }
        // Paint the partial result and schedule the next batch of pixels
        destinationContext.putImageData(destinationData, 0, 0);
        $timeout(mergeImages, 10);
    }

    mergeImages();
}
The $timeout code is there just to slow things down for my demo. In actual production code you would not have the timeout or the speed throttle; you would just loop until you hit the image height.
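For completeness, here is roughly what that stripped-down production version might look like, doing the whole merge in a single pass (a sketch along the lines just described, not code from the demo):

// One-pass version of the merge: no $timeout, no speed throttling.
function mergeImagesNow(){
    var sourceData = sourceContext.getImageData(0, 0, sourceCanvas.width, sourceCanvas.height),
        alphaData = alphaContext.getImageData(0, 0, alphaCanvas.width, alphaCanvas.height),
        destinationData = destinationContext.getImageData(0, 0, destinationCanvas.width, destinationCanvas.height);

    // Walk every pixel: 4 bytes (RGBA) per pixel.
    for (var offset = 0; offset < sourceData.data.length; offset += 4){
        destinationData.data[offset + 0] = sourceData.data[offset + 0];
        destinationData.data[offset + 1] = sourceData.data[offset + 1];
        destinationData.data[offset + 2] = sourceData.data[offset + 2];
        // White in the greyscale image -> fully transparent, black -> fully opaque.
        destinationData.data[offset + 3] = 255 - alphaData.data[offset + 0];
    }

    destinationContext.putImageData(destinationData, 0, 0);
}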
So take a look at it; it is a pretty cool visualization.
Cheers!