Tuesday, December 8, 2015
This week was really all about the dictionary. There was some file I/O too, and that was nice, but I had already played with that a bit the week before while parsing LinkedIn pages.
Tuesday, December 1, 2015
Week 6: Pair Programming++
This week was centered around two questions:
- How do you summon captain planet?
- What happens when you do?
The answers are 1) teamwork and 2) he kicks ass. That's what we're talking about this week with Pair Programming++. Sure, there was some good programming action this week: I wrote a Python script to parse out a thousand user IDs from LinkedIn and set up a script to automatically browse them, which got me booted from LinkedIn with a warning. We made a game too. But that's not much new. The real theme for the week was learning how to work together with a few people in real time on a small script without stepping on each other's toes.
So here is my preferred method of teamwork, and like any good plan or get-rich-quick strategy it has four key elements.
- Google Hangouts - This is your bread and butter. Talk about stuff.
- Google Drawings - Why just talk when you could be drawing?
- Codeshare.io - This is where you all work together on one file.
- GitHub - Dump things here when you are done, or if people work on the project during off-peak hours, have them submit pull requests with their code.
Using this deadly combination of technology, our group had a good time this week working on our adventure game. We coordinated via Google Hangouts, made drawings of our game map, collaborated in real time on codeshare.io, and checked things in on GitHub. For a while there I was worried we wouldn't get on track, but teamwork prevailed and by our powers combined... we made something cool.
My favorite Python package of the week is PyAutoGUI, found here: https://pypi.python.org/pypi/PyAutoGUI
This lets you automate your PC in neat ways so you can interact with programs without having to learn their APIs or figure out other ways to hack them. Just pretend a person is using the PC. It can search the screen for images and such, too.
Monday, November 23, 2015
CST205 Week 5 - go big or go home
![Mr Rogers under the sea]()

![Putin and his horse on an adventure]()
So how do you make a scuba themed filter?
- First, generally shift the colors to be ocean themed. Water absorbs different colors at different rates, so apply a different factor to each color channel.
- Then you need lighting effects, of course. But not just any lighting effects: you need an array of lighting effects to select from at random. They all have different resolutions, so you resize each one to approximately the size of the original image before applying it, then crop off any extra hanging over the sides.
- Then throw up a border, just in case your scaling functions messed up and you want to hide the defects. (This won't work.)
The same applies to a fungal theme, but instead of lighting effects you feed geometric patterns through the same kind of matching routine.
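As a sketch of that first color-shift step, a per-channel attenuation might look like this (the factors here are illustrative guesses, not the values from my actual filter):

```python
# Hypothetical per-channel attenuation factors for an "underwater" look.
# Water absorbs red fastest, green more slowly, and blue least of all.
OCEAN_FACTORS = (0.4, 0.8, 1.0)  # (red, green, blue) -- made-up values

def ocean_shift(pixel):
    """Scale one (r, g, b) tuple toward ocean tones, clamping to 0-255."""
    return tuple(min(255, int(channel * factor))
                 for channel, factor in zip(pixel, OCEAN_FACTORS))
```

Applying this to every pixel before compositing the lighting effect gives the whole scene a blue-green cast.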
![Taking a lighting effect and applying it to the source image by first rescaling and cropping it to match as closely as possible]()
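The rescale-then-crop step boils down to simple arithmetic: scale the effect so it covers the source image, then crop whatever hangs over. A rough sketch of that math (not my actual scaleMatch code):

```python
def cover_size(src_w, src_h, fx_w, fx_h):
    """Return the scaled effect size and crop amounts needed to cover
    a src_w x src_h image while preserving the effect's aspect ratio."""
    # Scale by whichever axis needs the bigger enlargement, so both
    # dimensions end up >= the source dimensions.
    scale = max(float(src_w) / fx_w, float(src_h) / fx_h)
    new_w, new_h = int(round(fx_w * scale)), int(round(fx_h * scale))
    # Anything hanging off the sides gets cropped away.
    crop_x, crop_y = new_w - src_w, new_h - src_h
    return new_w, new_h, crop_x, crop_y
```

Getting this arithmetic (and its rounding) wrong is exactly the kind of bug that produced the cut-off sections described below.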
Now I know what you're thinking: if my rescaling function didn't work perfectly, how did I get some cool photos? That's where numbers come in. I generated 50 randomized pictures with each of my filters. These were drawn from the following:
6 Images to apply the filters to
testPics = ["mrrogers.jpg", "hq.jpg", "mattreg.jpg", "mattShirt.jpg", "mattTesla.jpg", "putin.jpg"]
5 Lighting effects for ocean scenes
oceanLights = ["lightrays1.jpg", "lightrays2.jpg", "lightrays3.jpg", "lightrays4.jpg", "lightrays5.jpg"]
5 Geometric patterns to apply to fungal scenes
fungalLights = ["geometric1.jpg", "geometric2.jpg", "geometric3.jpg", "geometric4.png", "geometric5.jpg"]
By using test functions that accepted a counter for test samples, I was able to generate as many pictures as I wanted, save them to my laptop, and pick the best ones. For example:
def fungusTest(testSamples):
    for count in range(0, testSamples):
        # pick a random source image, filter it, and save the result
        source = testPics[random.randint(0, len(testPics) - 1)]
        testPic = makePicture(getMediaPath(source))
        testPic = fungalFilter(testPic)
        writePictureTo(testPic, getMediaPath("_fungalResult_" + str(count) + ".jpg"))
This kind of test capability also let me find problems with my software: ways it would crash, index problems, cropping problems, and so on. It led me to the conclusion that my combination of scaleMatch, blend, and smartBlend (which uses scale matching and blending) has big problems. Primarily, cut-off sections appear under certain constraints, for example:
But when you apply the same settings to Putin, you are fine:
So clearly some refinement of scaleMatch is needed, based on the results of my 100-image test set. A lot of good stuff was learned from spending this much time chugging along in Python, though. I got good experience with datatype conversion and rounding, exception handling (in case people don't copy my exact directory location for images), test generation, and image manipulation. Overall I am happy with how things went, though this week could knock my grade down a bit due to poor time management. That's OK.
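The exception handling mostly amounted to wrapping image loads so a bad media path fails with a readable message instead of a stack trace. A minimal version of that idea outside of JES (the loader argument here is a stand-in for whatever actually opens the image):

```python
def safe_load(path, loader):
    """Try to load an image; return None (with a friendly message)
    if the path is wrong, instead of crashing the whole test run."""
    try:
        return loader(path)
    except IOError as err:  # covers missing files and bad directories
        print("Could not open %s -- check your media path (%s)" % (path, err))
        return None
```

The test generator can then skip over a bad path and keep producing samples from the images that did load.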
Tuesday, November 17, 2015
CST205 Week 4
So here we are, week 4. What's new? This week was mostly about organizing our existing code and providing sample images in a gallery. There was a new line tracing function, reminiscent of a cellular automaton, that used a simple rule to emphasize the outline of an object. The code for this is described in my previous post on my image manipulation library.
So this week was more about organizing and getting ready to review other students than much new material for me personally. We set up our git organization at the beginning of the class, so we are good to go on that front. What is exciting to me personally is our midterm assignment. I have chosen to do two filters, one with a scuba diving theme and another with a mycorrhizal theme. I am going to approach this project with the following action plan, which is destined to be pared down as time goes on but preserves a minimum viable product:
- Minimum viable product
  - Create each filter using static assets and resize the input photograph to fit a dedicated frame size
  - Create base functions that are flexible but not overkill
- Add dynamic features
  - Continue to use a dedicated frame size
  - Apply weighted and bounded randomization to asset colorization and scaling
  - Randomize asset distribution (mushrooms, kelp, etc.) based on the generated array of randomized assets
- Implement key based scene generation
  - Before performing asset generation or scene modification, generate a randomized key of sufficient length
  - Use that key to repeatably generate a scene, so you can select from an array of scene presets or reuse a past favorite
- Key based dynamic scene generation with variable input image
  - Variable input image size drives output image size
  - Key determines quantity, location, colorization, and scaling of generated assets
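The key based scene generation idea leans on the fact that a seeded random generator replays the same sequence every time. A tiny sketch of that property using the standard library (no JES needed; the parameter meaning is arbitrary here):

```python
import random

def scene_params(key, count):
    """Derive `count` repeatable asset values (0-100) from a scene key."""
    rng = random.Random(key)  # seed a private generator with the key
    return [rng.randint(0, 100) for _ in range(count)]
```

Feed the same key in twice and you get the same scene; save the keys of scenes you like and you have presets for free.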
If our scene assets are
kelp_01.jpg
kelp_02.jpg
kelp_03.jpg
fish_01.jpg
fish_02.jpg
fish_03.jpg
bubble_01.jpg
bubble_02.jpg
bubble_03.jpg
Then we have 9 fixed assets. We can use these to generate a ton of dynamic assets by changing their size, position, rotation, reflection, etc.
asset1_minScale, asset1_maxScale, asset1_xCenter, asset1Weight, asset1ReflectionProb, .... assetN_minScale, assetN_maxScale
I'm not sure exactly what I'd like to have in my key, but it would look something like "70,110,50,10,10...", yielding a 70% minimum scale, 110% max scale, an x center at 50% of image width, 10% max deviation from the halfway point, 10% chance of vertical reflection, and so on. Ideally you could just feed in an array of assets and this kind of key string to generate your scene from the key values. Without changing the program I could then figure out what my favorite settings are. But if I felt really crazy I could randomize that input string a bit too, so that the randomization of assets is based on a randomized initial condition set. Then you could run some kind of normalization based on scene configuration variables.
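Parsing a key string like that into per-asset parameters is straightforward. This sketch assumes five integer values per asset in the order described above (the field layout is hypothetical, since the key format isn't final):

```python
def parse_key(key_string, fields_per_asset=5):
    """Split a comma-separated key into one parameter list per asset."""
    values = [int(v) for v in key_string.split(",")]
    # Chop the flat list into fixed-size groups, one group per asset.
    return [values[i:i + fields_per_asset]
            for i in range(0, len(values), fields_per_asset)]
```

The scene generator would then walk the parsed groups in parallel with the asset list.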
So maybe my scuba scene and the mycorrhizal scene would be generated by overlapping functions with different assets and configuration parameters fed into them. Kelp, for example, might go on the sides of the scene because it is tall and you don't want to block the focus of the picture. But small mushrooms could run across the whole bottom of the scene, since they aren't tall.
The trick in the end is to not force the user to use a fixed picture size. I hate that. I also don't want to just plop a clown head down on the person in the center of the frame and call it a day. Hopefully that makes sense.
Sunday, November 15, 2015
Image Manipulation Library Gallery
The following is a gallery of images generated from my version of imagmanip.py, available here on GitHub. On the one hand this assignment might seem like it sucks, but on the other hand working on a project with poor documentation is even worse... so I appreciate what is going on here. This is a nice way to document a library with example photos and the source code, but it would be nicer if there were a way to dynamically link the snippets to the functions in the GitHub files, in case the comments change. Or maybe just automatically generate the whole page with test code, a test image set, and something like Doxygen. Today, though, we follow along with the assignment!
As a side note, it's important that setMediaPath() be called before using any of my functions that require several pictures, so that you don't have to use absolute file names. Then I can call getMediaPath() to open that folder and grab pictures. Friends don't let friends use absolute file paths.
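Outside of JES the same idea is easy to reproduce with os.path: store the media folder once and join bare filenames against it. A minimal sketch (the module-level variable here is just an illustration, not JES's internal state):

```python
import os

_media_path = ""

def set_media_path(folder):
    """Remember the folder that holds all the sample images."""
    global _media_path
    _media_path = folder

def get_media_path(filename):
    """Build a full path from the remembered folder and a bare filename."""
    return os.path.join(_media_path, filename)
```

Every image load then goes through get_media_path(), so moving the media folder means changing one line.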
So here we go with some functions and before/after photos.
Rose-colored glasses (lab #3)
This filter shows a bit of color modification. I decided to throw in an inversion at the end to really make it pop. Why invert? It makes it even more red; these blue glasses, for example, would end up purple without the inversion to really flip it. Also, after completing all the labs I went back through and changed them so that they never modify the original picture. I don't want things to get messy, so it is nice to return the modified picture instead of changing the original. This comes at the cost of having two copies of the picture in memory, but RAM is cheap these days and computers are really fast, which I think is part of why languages like Python are so popular.
# Function: reduce green to 30% and make everything pink inversely proportional to its original pinkness
# Params: source pic
# Returns: rose colorized picture
def roseColoredGlasses(sourcePic):
    retPic = duplicatePicture(sourcePic)
    pixels = getPixels(retPic)
    for p in pixels:
        r = getRed(p)
        b = getBlue(p)
        g = getGreen(p)
        setGreen(p, g * 0.3)
        setRed(p, 255 - r)
        setBlue(p, 255 - b)
    return retPic
Negative (lab #3)
Don't be negative, unless you're working on lab 3 or talking about the CIA, which is a group you should not trust. This function is pretty straightforward: grab the pixels and make them inversely proportional to their current brightness. 255 is the max, so I just subtract the current value.
# Function: makes a photo's negative (inverts the image)
# Params: source picture to invert
# Returns: negative photo
def makeNegative(sourcePic):
    retPic = duplicatePicture(sourcePic)
    pixels = getPixels(retPic)
    for p in pixels:
        r = getRed(p)
        b = getBlue(p)
        g = getGreen(p)
        setRed(p, 255 - r)
        setBlue(p, 255 - b)
        setGreen(p, 255 - g)
    return retPic
Better black and white (lab #3)
Let's face it, when it comes to Mr Rogers... he's a classic man. You can be mean when you look that clean, because he's a classic man. This function was a bit of a gimme because the multiplier values were given to us. It's still cool though.
# Function: better black and white photo
# Params: source picture to convert
# Returns: black and white photo
def betterBnW(sourcePic):
    retPic = duplicatePicture(sourcePic)
    pixels = getPixels(retPic)
    for p in pixels:
        r = getRed(p)
        b = getBlue(p)
        g = getGreen(p)
        # weighted average: the eye is most sensitive to green, least to blue
        average = (r * 0.299) + (b * 0.114) + (g * 0.587)
        setRed(p, average)
        setBlue(p, average)
        setGreen(p, average)
    return retPic
Bottom-to-top mirror (lab #4)
There were a variety of mirroring functions in this project. This one, mirrorVertical, lets you select whether to mirror top to bottom or bottom to top (with a pyramid, top to bottom looks better!). It then traces from left to right, copying the top row to the bottom row until it hits the middle of the picture. There were no real problems with this, but it doesn't seem super fast.
# Function: Vertical image mirroring
# Params: 1) an image, 2) bool: True mirrors top to bottom, False mirrors bottom to top
# Returns: mirrored image
def mirrorVertical(sourcePic, topBottom):
    width = getWidth(sourcePic)
    height = getHeight(sourcePic)
    retPic = duplicatePicture(sourcePic)
    yEnd = height / 2
    # mirror top half of image onto the bottom half
    if topBottom:
        for x in range(0, width):
            for y in range(0, yEnd):
                sourcePixel = getPixel(sourcePic, x, y)
                destPixel = getPixel(retPic, x, height - y - 1)
                sourceColor = getColor(sourcePixel)
                setColor(destPixel, sourceColor)
    # mirror bottom half of image onto the top half
    else:
        for x in range(0, width):
            for y in range(0, yEnd):
                sourcePixel = getPixel(sourcePic, x, height - y - 1)
                destPixel = getPixel(retPic, x, y)
                sourceColor = getColor(sourcePixel)
                setColor(destPixel, sourceColor)
    return retPic
Shrink (lab #4)
Shrink is critical if you want to make a few copies of an object to make it seem like there is more variety in your picture, when in fact you just used the same picture over and over. Using it to resize a picture one time is a waste, though, because you should do that in pre-processing with a paint program. But if you mix and match shrink with color filters you can give some good variety to a scene, with a few different kinds of dragons, for example, or trees. So maybe with 5 base trees and some resizing and recoloring you could give yourself some real variety in a forest!
# Function: Shrink an image by 50%
# Params: Picture to scale
# Returns: Resized picture
def shrink(sourcePic):
    sourceHeight = sourcePic.getHeight()
    sourceWidth = sourcePic.getWidth()
    retPic = makeEmptyPicture(sourceWidth / 2, sourceHeight / 2)
    destX = 0
    destY = 0
    destWidth = retPic.getWidth()
    destHeight = retPic.getHeight()
    # copy every other pixel from the source into the half-size picture
    for sourceY in range(0, sourceHeight, 2):
        destX = 0
        for sourceX in range(0, sourceWidth, 2):
            if (destX < destWidth) and (destY < destHeight):
                sourcePixel = getPixel(sourcePic, sourceX, sourceY)
                destPixel = getPixel(retPic, destX, destY)
                setColor(destPixel, getColor(sourcePixel))
            destX += 1
        destY += 1
    return retPic
Collage (lab #5)
For my collage lab I was kind of tripping out without an alpha channel to make things look nice. So I upgraded the copy function to accept parameters for red, green, and blue values along with a precision, so that you can pick which color should be transparent. It's a green screen for any color, with adjustable match precision, that lets you place pictures out of bounds to make a great collage! The main thing I learned from this is that working with images this way really starts to slow things down.
Here is the code for the collage, along with critical support functions (rotate, shrink, and pyCopyA, a copy with alpha):
# Function: Shrink an image by 50%
# Params: Picture to scale
# Returns: Resized picture
def shrink(sourcePic):
    sourceHeight = sourcePic.getHeight()
    sourceWidth = sourcePic.getWidth()
    retPic = makeEmptyPicture(sourceWidth / 2, sourceHeight / 2)
    destX = 0
    destY = 0
    destWidth = retPic.getWidth()
    destHeight = retPic.getHeight()
    # copy every other pixel from the source into the half-size picture
    for sourceY in range(0, sourceHeight, 2):
        destX = 0
        for sourceX in range(0, sourceWidth, 2):
            if (destX < destWidth) and (destY < destHeight):
                sourcePixel = getPixel(sourcePic, sourceX, sourceY)
                destPixel = getPixel(retPic, destX, destY)
                setColor(destPixel, getColor(sourcePixel))
            destX += 1
        destY += 1
    return retPic
# Function: Rotate image 90 degrees CCW or CW
# Params: picture to rotate; CCW == True rotates CCW, CCW == False rotates CW
# Returns: rotated picture
def rotatePic(sourcePic, CCW):
    sourceHeight = sourcePic.getHeight()
    sourceWidth = sourcePic.getWidth()
    retPic = makeEmptyPicture(sourceHeight, sourceWidth)
    # Work from top to bottom of the source picture, one row at a time, left to right.
    # Copy into the destination working from the bottom-left corner up, one column at a time, left to right.
    if CCW:
        for sourceY in range(0, sourceHeight):
            for sourceX in range(0, sourceWidth):
                sourcePixel = getPixel(sourcePic, sourceX, sourceY)
                destPixel = getPixel(retPic, sourceY, sourceWidth - sourceX - 1)
                setColor(destPixel, getColor(sourcePixel))
    # Work from top to bottom of the source picture, one row at a time, left to right.
    # Copy into the destination working from the bottom-right corner up, one column at a time, right to left.
    else:
        for sourceY in range(0, sourceHeight):
            for sourceX in range(0, sourceWidth):
                sourcePixel = getPixel(sourcePic, sourceX, sourceY)
                destPixel = getPixel(retPic, sourceHeight - sourceY - 1, sourceX)
                setColor(destPixel, getColor(sourcePixel))
    return retPic
# Function: python copy with alpha; copy the source image onto the target image, skipping "transparent" pixels
# Params: source image, target image, target x offset, target y offset, alpha R/G/B key color, precision
# Returns: nothing; the target picture is modified in place
def pyCopyA(source, target, targetX, targetY, alphaR, alphaG, alphaB, precision):
    targetWidth = target.getWidth()
    targetHeight = target.getHeight()
    for y in range(0, source.getHeight()):  # work from top to bottom
        if (y + targetY < targetHeight) and (y + targetY > 0):  # Y range check so we can go out of bounds and not worry
            for x in range(0, source.getWidth()):
                if (x + targetX < targetWidth) and (x + targetX > 0):  # X range check so we can go out of bounds and not worry
                    sourcePixel = getPixel(source, x, y)
                    sourceColor = getColor(sourcePixel)
                    # only copy pixels whose total color distance from the key exceeds the precision
                    if (abs(sourceColor.getRed() - alphaR) + abs(sourceColor.getBlue() - alphaB) + abs(sourceColor.getGreen() - alphaG)) > precision:
                        destPixel = getPixel(target, x + targetX, y + targetY)
                        destPixel.setColor(sourceColor)
# Function: Make a collage with dragons, fire balls, and a drone
# Params: none, you can't negotiate with dragons
# Returns: final image
def makeCollage():
    finalImage = makeEmptyPicture(1256, 712)
    greenScreen = [16, 223, 13]  # R, G, B values for the green screen
    colorPrecision = 150  # how close a color has to be to the key to be treated as transparent
    # Load up all images before we start the party
    background = makePicture(getMediaPath("background.jpg"))
    title = makePicture(getMediaPath("title.jpg"))
    moon = makePicture(getMediaPath("moon.jpg"))
    dinosaur = makePicture(getMediaPath("dinosaur.jpg"))
    fireballA = makePicture(getMediaPath("fireball.jpg"))
    treeA = makePicture(getMediaPath("tree1.jpg"))
    treeB = makePicture(getMediaPath("tree2.jpg"))
    treeC = makePicture(getMediaPath("tree3.jpg"))
    # Do secondary processing on images to generate additional assets
    smallDinoA = shrink(dinosaur)
    smallDinoB = rotatePic(smallDinoA, True)
    smallDinoC = rotatePic(smallDinoB, True)
    fireballB = rotatePic(fireballA, True)
    fireballB = rotatePic(fireballB, True)
    treeC = shrink(treeC)
    # Paint the scene
    pyCopy(background, finalImage, 0, 0)
    pyCopyA(title, finalImage, 350, 50, greenScreen[0], greenScreen[1], greenScreen[2], 200)
    pyCopyA(moon, finalImage, -20, -20, greenScreen[0], greenScreen[1], greenScreen[2], 200)
    pyCopyA(treeB, finalImage, 0, 0, greenScreen[0], greenScreen[1], greenScreen[2], 200)
    pyCopyA(treeC, finalImage, 0, 450, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    pyCopyA(treeC, finalImage, 900, 450, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    pyCopyA(treeA, finalImage, 900, 400, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    # Warning, impending dinosaur attack
    pyCopyA(fireballA, finalImage, 800, 50, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    pyCopyA(fireballB, finalImage, 300, 200, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    pyCopyA(smallDinoA, finalImage, -30, 200, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    pyCopyA(smallDinoB, finalImage, 600, 400, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    pyCopyA(smallDinoC, finalImage, 900, -20, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    return finalImage
Red-eye Reduction (lab #6)
Color Art-i-fy (lab #6)
Artify is a filter that basically bins an image. It has cutoff points which apply equally to each color: if a value falls between certain thresholds, it gets mapped to a fixed value. My real problem with this one was trying to index across an array of the RGB values and run the filter on them that way; assigning to the loop variable only rebinds the local name, so r, b, and g never actually change. Calling a separate function that does the binning of the colors worked fine, though.
Here is the code I had a problem with, before switching to a chunkier version:
for p in pixels:
    r = getRed(p)
    b = getBlue(p)
    g = getGreen(p)
    for color in [r, b, g]:
        # BUG: this only rebinds the loop variable; r, b, and g are untouched
        if color < 64:
            color = 31
        elif color < 128:
            color = 95
        elif color < 192:
            color = 159
        else:
            color = 223
    setRed(p, r)
    setBlue(p, b)
    setGreen(p, g)
# Function: Artify colorize
# Params: Source image
# Returns: Artified picture
def artify(sourcePic):
    retPic = duplicatePicture(sourcePic)
    pixels = getPixels(retPic)
    for p in pixels:
        setRed(p, artifyChk(getRed(p)))
        setBlue(p, artifyChk(getBlue(p)))
        setGreen(p, artifyChk(getGreen(p)))
    return retPic

# Function: bin a single color value into one of four fixed levels
def artifyChk(color):
    if color < 64:
        color = 31
    elif color < 128:
        color = 95
    elif color < 192:
        color = 159
    else:
        color = 223
    return color
Green screen (lab #6) - I used green screen just about everywhere because it's the only way to make a cool collage without transparency. But here is one I liked, inspired by the green fields of the show Teletubbies. This used the pyCopyA function above, so there is not much new to show here. I did make a new, simplified chroma key function for this, though, based on the course requirements.
# Function: Green screen the foreground onto the background
# Params: foreground picture, background picture
# Returns: combined photo
def chromaKey(foreground, background):
    greenScreen = [16, 223, 13]  # R, G, B values for the green screen
    colorPrecision = 150  # how close a color has to be to the key to be treated as transparent
    retPic = duplicatePicture(background)
    pyCopyA(foreground, retPic, 0, 0, greenScreen[0], greenScreen[1], greenScreen[2], colorPrecision)
    return retPic
Home made Thanksgiving (lab #7)
Objective: Make a cool thanksgiving card
Everybody seems to want to skip past Thanksgiving and move right on to Christmas. This card should straighten things out. This lab was a bit of a rehash of old functions, so the main things to learn were text with styles and, since we were going to share this software, setMediaPath() and getMediaPath(). Those seem nice, but a teammate said the dialog didn't pop up on her computer. I looked at the traditional ways of getting your working directory in Python, but those didn't seem to work well with JES.
The only non-standard function used for my Thanksgiving card was pyCopyA, shown above in my collage. Here is the card code:
# Function: Make a Thanksgiving card
# Params: none, you can't negotiate with dragons
# Returns: final image
def makeCardThanksgiving():
    greenScreen = [50, 255, 50]  # R, G, B values for the green screen
    colorPrecision = 100  # how close a color has to be to the key to be treated as transparent
    background = makePicture(getMediaPath("fatTurkey.jpg"))
    santas = makePicture(getMediaPath("santa.jpg"))
    dragon = makePicture(getMediaPath("dinosaur.jpg"))
    flamethrower = makePicture(getMediaPath("flamethrower.jpg"))
    textA = "Happy Thanksgiving! No, it's not Christmas yet."
    textB = "-Matt"
    pyCopyA(santas, background, 0, 290, greenScreen[0], greenScreen[1], greenScreen[2], 200)
    pyCopyA(flamethrower, background, 140, 510, greenScreen[0], greenScreen[1], greenScreen[2], 200)
    pyCopyA(dragon, background, -500, 400, greenScreen[0], greenScreen[1], greenScreen[2], 200)
    addTextWithStyle(background, 60, 381, textA, makeStyle(serif, bold, 24))
    addTextWithStyle(background, 371, 420, textB, makeStyle(serif, bold, 24))
    return background
Line Tracing
It's line tracing time! This one was another gimme where the algorithm is handed to you, but you tweak the main control variable ("dif" in this case). I like a luminance difference (dif) of 50 for my glasses; lowering the value seems to give you a fuzzier border with more black dots. By comparing the pixels below and to the right of the current pixel as you move across the whole image, you can find transition zones. As you move from the outside white border into the blue interior of the glasses there is a large change in color, which gets marked with black pixels. Once inside the glasses, however, there is only a small change through the lenses, as they are mostly blue. Around the nose piece the polycarbonate lens gets thicker (or maybe there is a squishy pad for the nose?), but either way the large change in material thickness lets less light through, so you get a darker section and a nice outline around the nose. I really like this effect; finding lines like this simplifies an object a lot, so I bet you could do easier tricks like tracking an object.
# Function: Line trace an image based on the luminance of the bottom and right neighbor pixels
# Params: source pic to trace and the minimum difference for abs(core - bottom) and abs(core - right)
# Returns: black and white line traced pic
def lineTrace(sourcePic, dif):
    width = sourcePic.getWidth()
    height = sourcePic.getHeight()
    retPic = makeEmptyPicture(width, height)
    xMax = width - 1
    yMax = height - 1
    for x in range(0, xMax):
        for y in range(0, yMax):
            if (y + 1 < yMax) and (x + 1 < xMax):
                corePixel = sourcePic.getPixel(x, y).getColor()
                botPixel = sourcePic.getPixel(x, y + 1).getColor()
                rightPixel = sourcePic.getPixel(x + 1, y).getColor()
                coreLum = corePixel.red + corePixel.blue + corePixel.green
                botLum = botPixel.red + botPixel.blue + botPixel.green
                rightLum = rightPixel.red + rightPixel.blue + rightPixel.green
                # small change in both directions means a flat area (white); otherwise it's an edge (black)
                if (abs(coreLum - botLum) < dif) and (abs(coreLum - rightLum) < dif):
                    setColor(retPic.getPixel(x, y), white)
                else:
                    setColor(retPic.getPixel(x, y), black)
            else:
                setColor(retPic.getPixel(x, y), white)  # just make it white if we hit the edge, no big deal
    return retPic