Category Archives: News

Wired’s Article on the “Web is Dead”

I had to concur with Wired's article about the web dying, posted a couple of days ago. After reading "both sides" as well as some of the comments, I wanted to offer my brief thoughts on the web, open standards, and the paradigm shift to mobile. While I think HTML is, and has always been, a markup language valuable for understanding how the web browser displays information, I have to agree that the technological shift to mobile has completely changed how we obtain information.

Consider application use today. We log on with our devices to Pandora, Netflix, Amazon, Facebook, Twitter, or Hulu, each holding a near-monopoly on its respective function. Yes, you can use your web browser to access these services, and devices even have their own web browsers built in. But there are so many "apps" these days that one can foresee that learning how to make mobile apps in Java, Flash Lite, or Objective-C will be more valuable than knowing how to make web apps in HTML, Flash, and AJAX/JavaScript.

Thankfully, good practice in general application development has always been to keep the front end a simple output of the backend. The SQL-to-PHP/JSP/Ruby-to-frontend pipeline will continue; only the frontend will change. The web, being 18 years old now, changed slowly, so in a way I feel web developers have seen this change coming.
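A minimal sketch of that separation, assuming a hypothetical get_profile endpoint: the backend queries storage and emits plain JSON, and whether the consumer is a browser page, an iPhone app, or a console game, only the frontend layer changes.

```python
import json

# Hypothetical backend handler: queries storage, returns plain data.
def get_profile(user_id, db):
    row = db[user_id]  # stand-in for the SQL layer
    return {"id": user_id, "name": row["name"], "friends": row["friends"]}

# The transport layer just serializes to JSON; swapping the web frontend
# for a mobile app leaves everything above this line untouched.
def handle_request(user_id, db):
    return json.dumps(get_profile(user_id, db))

db = {42: {"name": "Ada", "friends": [7, 9]}}
print(handle_request(42, db))
```

The names and fields here are invented; the point is only that the backend never needs to know which frontend is asking.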

As an online game developer, I see how games now connect to dozens of APIs (such as Facebook's) to become an "app" on the service. These services attempt to connect players to social networks, which can now be accessed from someone's phone. I think online "web" games will experience a shift into "apps" integrated into social networks and streams: you will see games run on different mobile, console, and tablet devices using the same backend. While this technique has been done by some gaming companies, it's only been done recently. Watch and see: very soon you'll see multiplayer games where iPhone players can play chess/checkers/poker against browser players. Eventually, as mobile devices increase in memory and performance, I bet you'll see more bandwidth-intensive games (such as shooters or real-time strategy games) go multiplayer across platforms.

I'm sure there's much that can be said about how UI/product designers will be affected. Those roles will increasingly have to be aware of all of the affordances and constraints of various devices. Yes, making a product for 30 different mobile phones and devices is WAY more difficult than making a web application work for 3 or 4 different browsers and resolutions.

Anyway, I think the lesson and proper reaction is to be thankful for XML, JSON, and HTTP request modeling in general, which let front ends like Flash, Java, and Objective-C, along with all of the open source parsing libraries, talk to the various backend server structures and clouds. And above all we should be thankful for Tim Berners-Lee and his decision to keep the Internet open so we can adapt to these changes. Let's just hope the web remains neutral :-).

What I don't like about the Wired article is its "debate" on who is to blame for the end of the web as we know it. Should we blame Google, Apple, and Microsoft, or should we blame ourselves? My concern isn't with the outcome of the debate; it's with the principle of framing this as a debate with sides at all.

I think a wiser statement than saying the web is dying is to say the web is evolving. The web has simply evolved to this state. I approach the deterioration of web pages as a natural and predictable phase of the Internet's evolution, given the capabilities of hardware today. I suppose a debate does exist over whether giants such as Apple, releasing products like the iPad, the iPhone, or most relevantly the App Store, accelerated this "virtual selection." I don't know how to answer that question, but I think it's unquestionable that Apple, Google, and Microsoft have helped nurture and shape the Internet to its current state. Who knows? If Google had released the Chrome OS they hinted about earlier this year (which I predicted in 2005 would be an all-browser, HTML- and JavaScript-based operating system) before Apple launched the App Store, then maybe I would be saying HTML5 is the future. Maybe now it's too late and it would be better to go with an Internet OS that's all app based... Or maybe there's room for both the web AND the Internet.

Scoring The Oregon Trail


A unit of measurement is defined as “any division of quantity accepted as a standard of measurement or exchange.”[1] Units of measurement are critically important in associating meaning with quantities. One gallon of milk, one-hundred yards on a football field, or a thousand pages in a book are examples of a numerical multiplicity of units that have definite associations with their respective mediums. People identify with a gallon, yard, or page because each unit holds concrete meaning and magnitude.

In the context of video games, measurements of success are determined by evaluations of player experiences and presented as quantifiable units. Unfortunately, the units used in video games are often simply "points," calculated from multiple dimensions of mathematical operations, that bear little or no verisimilitude to the fiction in the game. The educational Apple II game The Oregon Trail, played in the late 1980s and early 1990s, while offering fictionally relevant death screens, exemplifies the mistake of using incoherent and abstract "points" to evaluate the success of the player.

In this essay I will first briefly discuss how non-digital games calculate and evaluate scores, and the significance of their choices of score computation and units. Afterwards, I offer my explanation of why video games tend to stray from the score-computation mechanics of non-digital games. I then introduce The Oregon Trail and the game design choices that relate to the pioneer life of the 1840s and 1850s. Next, I outline the problems of The Oregon Trail's evaluation mechanism and how it regrettably detaches itself from the fiction. Lastly, I propose an alternative means of player assessment in The Oregon Trail and its benefits.

Part 1: Scoring without Computers

There are people who have careers in studying sports statistics. These statisticians predict un-played games via the analysis of played ones. The sheer quantity of logged statistics is overwhelming. In addition to points, basketball has a multitude of statistics for teams and players such as assists, rebounds, blocked shots, and turnovers. Baseball statistics consist of pitch count, balls, strikes, bunts, errors, batting average, earned run average, on-base percentage, and the list goes on. However, while all these numbers are tracked, the only numbers that matter are runs, goals, or points. If a team has the most runs, goals, or points in a game, then they are declared the winner. In these non-digital games, the way the unit of score is incremented is typically single- or low-dimensional. For soccer and hockey, the cause is putting the ball or puck past the goal line to score a goal. For baseball, the cause is stepping on home plate to earn a run.

Occam's razor applies to scoring in non-digital games. Giving bonus points to teams that score a bicycle kick, or subtracting points for missed questions in a trivia game, would add unnecessary complexity. In the context of score culture, players discuss or use secondary metrics for comparison. Naturally, secondary statistics tend to be directly proportional to primary ones: baseball teams with better batting averages tend to win more games. Sports fans use secondary statistics as reasons why their favorite team is better than the other, but those interpretations are always subjective and speculative. At the end of the day, the primary statistic is what matters. Who cares if the team's batting average is the highest in their division if they aren't making the playoffs? As the late German soccer coach Sepp Herberger famously told his team, "The ball is round, the game lasts 90 minutes, everything else is... theory."[2] There is no justice in giving extra points to teams with higher batting averages either. Games that apply weight to specific secondary statistics risk ruining the balance of the game and cultivating an unanticipated culture of community assessment.

Part 2: Cultural Effects of Assessments

Evaluation is an embedded part of our lives. Capitalism, by definition, encourages competition and evaluation. Schools rank our performance with tests and assignments. Importantly, test grades are often calculated with a single-dimensional computation: the percentage of questions answered correctly.

More important, especially in relation to video games, is the concept of score inflation and deflation. While a test grade of C implies average, a class full of D students may consider a C above average. In games the same process applies. Both game designers and teachers attempt to predict average results in order to assess performance. However, game designers who fail to predict accurately force players into creating community-based assessments. Scoring five goals in a soccer game is considered abnormally high because of the outcomes of previous games. Game designers don't have the luxury of previous gameplay data; consequently, most video games don't have built-in interpretations of the player's experience. Comparisons are often absent in video games, and the gaming community itself turns the scores into meaning. Interpretations are usually best left to the community; games sometimes catalyze community comparisons with scoreboards. "You've made the high scores!" is the typical embedded evaluation of a player's experience. Yet what typically happens is that the community ignores the points and focuses on the experiences themselves as sources of evaluation. This phenomenon is evident in the conversations players of action games have with one another: "I just beat the second boss!" "Oh yeah? I reached the third boss and killed him with only a knife!" The experiences become the metrics, and the units of measurement used in the game are disregarded.

Most of the time this failure is attributable to the incoherence between the metric and the fiction. "Points" is a commonly used yet linguistically abstract term with little relevance to the fictions it represents.

These points are often calculated in a meaningless way. Games tend to have a multitude of methods of computing score which have little or no coupling to the player's experience. In Pac-Man the player amasses points throughout the levels by eating ghosts and collecting dots; however, the competitive Pac-Man culture became more about "What fruit did you get to?", a reference to the various types of fruit Pac-Man encounters in later levels[3]. The fruit became more significant to the players than the points because the fruit had a tighter coupling to the player's experience.

Besides encouraging replay and competition, one of the reasons video games have a propensity to add loosely coupled, computationally complex scoring components is the obvious ability of the computer to process and store information. Non-digital games left record-keeping to the players rather than to the system itself. The popular game of Tic-Tac-Toe is a perfect example. What if Tic-Tac-Toe had a rule that if a player waits more than 3 seconds on their turn, the other player is declared the winner? One can only imagine the arguments players would have over how long a player waited. The choice to leave this rule out is not coincidental: requiring the use of a stopwatch during play would be ridiculous. However, the computer provides many affordances for games; one of those affordances is the ability to track elapsed time. Consequently, this additional rule can be added to the game without destroying the physical experience of playing it. But in game design, the consensus has generally pointed toward justice and balance rather than coherence; consequently, multiple added rules can dilute the meaningfulness of the score.

For instance, a player may kill fewer than 100 aliens in a first-person shooter but could receive a score of 3104 depending on the number of headshots, the amount of ammunition used, and so on. While this approach may seem competitively fairer, the unfortunate fact is that players have a difficult time understanding their score during gameplay when score computations are multidimensional. Only the computer, due to its procedural nature, can keep track of the scores.

Another affordance computers hold over humans is processing power and memory. Humans playing games with multidimensional scoring rarely know how much their score will increase, because the human mind struggles to rationalize the numbers. Games that dispense thousands or even millions of points for random achievements are easy examples of games in which the point computation loses meaning. Unfortunately, games that fail to offer a coupled, low-dimensional scoring mechanism risk ruining the player's ability to improve their performance. A player is expected to master a game by receiving feedback and modifying future game decisions based on that feedback. Yet multidimensional feedback is nearly always more biased, due to the chosen weight of each attribute in the score formula. Score justice is consequently unattainable because of score bias. Especially for fiction games, implementing every possible parameter into a final score is typically an impossible task; consequently, designers must choose the specific attributes they deem important. This process of selection creates the score bias. Additionally, score justice is circumvented via "score loopholes": players may find strategies for racking up a high score by performing a specific gameplay process repeatedly. An example of a score loophole is found in the 1985 MECC game The Oregon Trail, which I discuss later. The Oregon Trail is an example of a game that attempts to justifiably evaluate player performance with a confusing, zero-coupled, multidimensional scoring computation.
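The opacity of weighted, multidimensional scoring is easy to demonstrate. In this sketch the shooter formula and its weights are entirely invented; the point is that a player with fewer kills can outscore one with more, and no player could predict that mid-game.

```python
# Hypothetical shooter scoring; the weights are invented to show how a
# multidimensional formula becomes opaque to the player.
def weighted_score(kills, headshots, shots_fired):
    return kills * 25 + headshots * 120 - shots_fired * 2

def simple_score(kills):
    return kills  # one dimension: a player can track this in their head

# Fewer kills can still mean a higher score once the weights kick in.
sharpshooter = weighted_score(kills=80, headshots=20, shots_fired=348)
sprayer = weighted_score(kills=120, headshots=1, shots_fired=958)
print(sharpshooter, sprayer)
```

Only the computer ever "sees" the formula; the player just watches an unexplained number climb.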

Part 3: The Oregon Trail (1985)

The Oregon Trail, originally conceived in 1971 and produced by MECC in 1974 before being released to the public in 1985, is a heavily fiction-based game about pioneer life on the real Oregon Trail in 1848. The player undertakes the role of a banker, farmer, or carpenter who leads a family across North America on the Oregon Trail. Starting in Independence, Missouri, where most pioneers began their migration[4], the player manages food, clothing, oxen, money, and hunting bullets throughout a long and eventful voyage to Oregon.

The Oregon Trail maintains tight couplings with its 1848 fiction during most of the playing experience. Nearly every event and action is coupled to the fiction presented in the game; consequently, the game became a very successful teaching tool in elementary schools[5]. Some events include members of the wagon party contracting various sicknesses or injuries, or thieves coming at night to steal supplies; these events were faced by travelers along the real Oregon Trail[6]. Choices are also relevant: at the cost of health, food rations and wagon pace can be modified to deal with shortages of supplies. Even the character death results are particularly verisimilar. In addition to the removal of the wagon member from the group, the player has the option of placing a custom engraved tombstone at the place of death. Later players have the option of viewing these tombstones on their own journeys.

However, strangely enough, The Oregon Trail fails miserably in the win-state department. While most of the game's explanations and outcomes are relevant to the life of pioneers in 1848, the final screen simply shows a numeric score: the result of a series of irrational mathematical operations.

From Wikipedia: “Points are awarded according to a formula weighted by the profession chosen (points are doubled for a carpenter and tripled for a farmer), the number and health of surviving family members, remaining possessions, and cash on hand.”
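That scheme can be sketched in a few lines. Only the profession multipliers come from the quote above; the per-item weights below are invented for illustration.

```python
# Profession multipliers as described in the quote; all other weights
# are hypothetical stand-ins.
MULTIPLIER = {"banker": 1, "carpenter": 2, "farmer": 3}

def final_score(profession, survivors, supply_points, cash):
    base = survivors * 500 + supply_points + cash // 5  # invented weights
    return base * MULTIPLIER[profession]

# The identical journey, scored three ways:
for job in ("banker", "carpenter", "farmer"):
    print(job, final_score(job, survivors=5, supply_points=300, cash=400))
```

The same trip earns triple the points solely because the player picked "farmer" at the start, which is exactly the decoupling the next paragraphs criticize.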

Surprisingly, the only meaningful difference between vocations in The Oregon Trail is the bonus multiplier applied at the end of the game. Instead, the game should have offered different affordances for each role. A banker should have better bargaining skills in buying items and trading; farmers should be able to keep the oxen alive longer and have better hunting skills (in The Oregon Trail's hunting mini-game, the character is unable to carry more than 100 pounds of killed animals back to the wagon); and carpenters should be able to repair broken parts more quickly than the other two roles. Instead, The Oregon Trail chose simply to alter the amount of starting money for each role and change the bonus multiplier. The Oregon Trail lazily uses vocation selection as a "difficulty mode" selection.

The scoring's multidimensional complexity, in addition to causing confusion, introduces gameplay bias. The game says that an "Odysseus" pioneer with more food at the end of the trail ranks higher than a pioneer with less food but more surviving family members. The Oregon Trail argues that a wagon leader's goal is to balance supplies and money by the end of the trail... and also to be a farmer. The Oregon Trail subjectively applies value to specific vocations and decisions instead of linking performance to the pioneer narrative.

Part 4: Narrative Evaluation Alternatives

A better alternative to the irrelevant scoring system used in The Oregon Trail is to reveal the fate of the player's family or party after settling in Oregon. Instead of attaching a number to the player's performance, simulate the post-journey life of the family. Historically, settlers arriving in Oregon sent letters east to other families and friends describing their happiness and circumstances in Oregon[7]. In place of a score calculation, the game could display a letter sent by the player's family to a fictional family or friend back home in Independence, Missouri, describing their new life in Oregon. Depending on the remaining supplies and cash, the letter would be written with a different tone and outcome.

Money is disbursed at the beginning of the game but not earned through the adventure. Sequential checkpoints increase the price of supplies; consequently, players stock up on supplies early to save money. If the player has no money at the end of the trip, the letter home could mention that "money has been tight," that their house is "small," and that their kids attend "poor" schools. The more money the player saves, the better the schools and the larger the land they own. If the player keeps less clothing, the letter could tell how the winters have been tough. If they kept oxen, the letter could tell how they use the oxen to travel to town to buy supplies, such as clothing to manage the winter. The letter could also read differently depending on the player's vocation. A banker, depending on the other variables, could have found a job ranked anywhere from "unemployed" to "President of a National Bank" (Congress passed the National Bank Act in 1863, which provided for a system of banks chartered by the federal government; the Oregon Territory was acquired in 1848, the year the player's journey begins, and Oregon became a state in 1859). Farmers could write of their poor or strong harvests, and carpenters could become architects of varying positions.

This custom "alternative ending" approach would help fix the deficient narrative of the end game. Pioneers in the 1850s journeyed to Oregon for a better life[7]. Integrating those dreams and aspirations into the game adds agency. In reality, all the attributes and parameters of the trip are still incorporated into the final assessment of the player's experience; however, instead of showing a number, the game shows a narrative in the form of a letter.

In relation to the competitive aspect of the game, narratives do not allow for unbiased comparison. The 1985 Oregon Trail has a high-score table that ranks players according to their numeric score, and each player is given an overall rating, such as "greenhorn," "adventurer," or "trail guide," determined by that score. Since my "letters back home" proposition employs narratives that can only be subjectively evaluated, a ranking system based on those narratives is unfeasible: different players may view different narratives as more "successful" than others, depending on their personal definitions of success and failure. However, instead of eliminating the "Oregon Trail Top 10" scoreboard, the scores should be replaced with a simple number: the days taken to reach Oregon. Days taken are already directly proportional to the numerical score; the better the player manages their supplies and health, the faster the player reaches Oregon. In fact, days taken reflect the management of the entire experience more directly than the current scoring formula does, because the current formula only considers the end state of the trip rather than its progress. For instance, suppose a player travels the trail in "good" health, but right before reaching Oregon their health falls to "fair." That player's score will be lower than that of a player who ends in "good" health, even if the latter held only "fair" health through the entire trip. The Oregon Trail, like all games with multidimensional scoring systems, suffers from these unexpected "score loopholes." The narrative endings solve the problems of point bewilderment and loose coupling, as well as encourage varied gameplay in order to see different endings.
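A days-based top 10 is trivial to compute and, unlike the weighted score, trivial for players to interpret. A sketch with invented initials and day counts, sorted ascending since fewer days is better:

```python
# Proposed replacement leaderboard: rank by days taken to reach Oregon.
# Entries are invented for illustration.
runs = [("MEB", 131), ("ACE", 104), ("JLR", 98)]

for initials, days in sorted(runs, key=lambda r: r[1]):
    print(f"{initials}: reached Oregon in {days} days")
```

Every player immediately understands what the number means and how to improve it, which is the whole argument for low-dimensional scoring.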


Video game evaluations tend to be quantitative instead of qualitative; consequently, they are often ignored by players. Using abstract and extraneous formulas and bonuses damages coherence, agency, and ultimately immersion. To retain verisimilitude in player experiences, performance evaluations require low dimensionality and fictional relevancy.

The following is a prototype of an example end screen letter a player may see. Modify the inputs to see varying outcomes.
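A minimal sketch of such a letter generator, with invented thresholds, wording, and inputs (the original prototype's rules are not reproduced here):

```python
# Hypothetical "letter home" ending; thresholds and phrasing are invented.
def letter_home(profession, cash, clothing_sets, oxen):
    money_line = ("Money has been tight." if cash < 100
                  else "We have saved enough for good land.")
    winter_line = ("The winters have been hard on us." if clothing_sets < 2
                   else "Our warm clothes have seen us through the winters.")
    oxen_line = ("Our oxen carry us to town for supplies." if oxen > 0
                 else "Without oxen, the town is a long walk away.")
    school = "fine" if cash > 300 else "poor"
    return ("Dearest friends in Independence,\n"
            f"We have settled in Oregon at last. {money_line} {winter_line} "
            f"{oxen_line} The children attend the {school} school here.\n"
            f"Yours, a humble {profession}")

print(letter_home("farmer", cash=50, clothing_sets=1, oxen=2))
```

Changing the inputs changes the tone of the letter, so every trip's outcome is still assessed, just in the fiction's own voice.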


1. Princeton University, "WordNet Search," 2006.

2. "Sepp Herberger: Biography."

3. "Pac-Man: Classics Reunited."

4. M. Trinklein and S. Boettcher, "Independence."

5. W. Jolley, C. Fujiyama, S. Alami, and L. O'Neal, "The Trail as a Teaching Tool," 2003.

6. M. Trinklein and S. Boettcher, "The Oregon Trail: Hardships."

7. M. Beaver, "The Oregon Trail," 2001.


Why you keep getting a computeSpectrum Security Error

Probably one of the worst and most frustrating bugs.

It turns out computeSpectrum will NOT work if ANOTHER Flash movie is using audio. Meaning if you have a Flash movie that uses SoundMixer.computeSpectrum, one of those super annoying security sandbox runtime errors will pop up in the browser if YouTube, Gmail, or any other Flash movie using audio is open.

The fix? There is no fix. No, Security.allowDomain() won't work. No, placing crossdomain.xml all over your server won't work either. The only thing one can do is catch the error (or use SoundMixer.areSoundsInaccessible()) and have something else happen in the meantime. This bug is especially annoying for game developers like me who are working on a game that uses computeSpectrum to generate content...

Grrr… So annoying.

Encourage Adobe to fix this by voting for it.

Ethics and Online Privacy

Online, privacy barely exists. Every page, email, and instant message can be intercepted, manipulated, and/or logged. The millions of Internet users browsing the web at this very moment may be surprised to find that their online information is not secure. The Internet's greatest strength, accessibility, is its greatest weakness, security. Web servers have the ability to store information about every visitor. In order to provide services to their customers, websites must store information on their customers' machines as well as in their databases. Collecting data is an essential function of web applications; unfortunately, the majority of data collection practices used today are unethical because users are not properly informed of what, how, and why their information is being gathered.

Digital data is virtual; the information does not exist anywhere other than as bits and bytes stored electronically. Unlike a letter that exists on a physical sheet of paper, digital media can be transmitted, duplicated, or modified in microseconds. Online data has the same characteristics. One analogy for how the web handles data is a boy and his father playing catch with a baseball. The child (the client) throws a baseball to the father (the web server). On the baseball there is writing that the child wrote and the father can read. The father then erases the writing on the baseball, responds to the child's message with his own, and throws the ball back. Unfortunately, the boy is very young and illiterate, and must have his mother (the web browser) do the reading and writing for him. For example, the boy throws a baseball that says "Give me the website's homepage file" (which his mother wrote) to the father. The father reads the message, writes the HTML file on the baseball, and throws it back to the boy. The boy catches the baseball and asks his mother to read it for him. The mother (the web browser) checks the file to make sure it does not have any malicious "writings" (code) and then reads it to the boy. The mother can also remember data (cookies) that the father wants the boy to write down on the next baseball.

This analogy may seem odd; however, it captures how the Internet works. The baseball represents the data being passed back and forth. Unfortunately, that baseball can be "intercepted" on its way in either direction. If the writing on the baseball is not encrypted, then the person intercepting it has access to that data. Additionally, there is nothing stopping the father (the web server) from passing the boy's data along to some other person.
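Spelled out without the analogy, the "writing on the baseball" is just text: an HTTP request and response. The hostname and cookie value below are made up for illustration.

```python
# A raw HTTP exchange: what actually travels between browser and server.
request = (
    "GET /index.html HTTP/1.1\r\n"   # "give me the homepage file"
    "Host:\r\n"
    "Cookie: session=abc123\r\n"     # data the browser remembered
    "\r\n"
)
response = (
    "HTTP/1.1 200 OK\r\n"
    "Set-Cookie: session=abc123\r\n" # "write this down for next time"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html>...</html>"               # the "HTML file on the baseball"
)

# Anyone who intercepts this unencrypted exchange can read both sides.
print(request)
print(response)
```

Everything here, including the session cookie, is plain text unless the connection is encrypted, which is exactly the interception risk described above.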

Web servers need the pieces of data stored in "cookies" to deliver products; however, the way web servers use cookies poses significant ethical concerns. Cookies are a form of invisible data gathering; most users have no idea that cookies are being stored on their machines. The use of web browser cookies by websites is ethical and essential to the web; the problem cookies pose is their potential to be misused and abused. Most websites store enough information to identify individuals, and that cookie data can be compromised if it is stored on a user's PC in unencrypted form, since any user of that PC can read it. Websites should be designed to let users know what information is being stored and how, as well as use encryption to protect the data. There is a standard currently in use on the web called a Privacy Policy document: a detailed description of what information a website is collecting. Currently in the U.S., only websites that target or knowingly collect information from children under the age of 13 must have a Privacy Policy posted. The law in question, the Children's Online Privacy Protection Act (COPPA), requires users under the age of 13 to obtain parental approval before registering with a website. While this act is well intended, most websites (especially small ones) do not have the resources to verify parental signatures.

There are legitimate counterarguments to enforcing Privacy Policy documents. Simply verifying that what Privacy Policy documents say matches what the website actually collects is nearly impossible given the vast number of websites in operation. A second problem is that actual regulation is impossible, as the government does not have the resources to verify whether a web server stores the information listed in its privacy policy. Lastly, and most importantly, very few users actually read privacy policies. A study done at Carnegie Mellon University [1] finds that privacy policies on U.S. sites average 2,500 words and take about 10 minutes to read (thus costing billions of dollars per year in opportunity cost). The study concludes that because Privacy Policy documents take so long to read, and are difficult to understand, most Internet users ignore them.

While cookies are stored on my machine, the data I enter into web forms is stored on web servers. Since Privacy Policy documents are so rarely posted, followed, or read, how can I be assured that the credit card information I entered in a web form to purchase a product won't be kept by the webmaster? There is no way for me to know how long it is stored and who has access to view it. Credit card and social security numbers are examples of sensitive information that criminals, rather than companies, seek. Identity theft is so rampant [2] in the United States that businesses lose 221 billion dollars to it every year. Identity theft has hurt the online economy, according to a survey of online shoppers done by Harris Interactive for Privacy & American Business and Deloitte & Touche LLP [3]: 64% of respondents had decided not to purchase a product from an online company because they weren't sure how their information would be used.

Currently, solutions are being offered via well-known third-party vendors such as PayPal; however, many websites choose to store credit card information themselves. Unfortunately, these sites are often unprotected from hackers and criminals seeking to steal the identities of their customers. An ethical solution is to have a government-regulated list of authorized transaction vendors (like PayPal or Google Checkout) that online transactions must use. The use of any private system should be illegal unless it is on the government's list of approved transaction middlemen.

While cookies are an important part of online privacy, a report [4] concerning privacy in the European Union mentions that protecting personal data from intrusion is not the only part of protecting privacy. Legaresi reports that “Personal data protection has absorbed most of regulatory efforts devoted to privacy, on the wrong assumption either that it coincides with privacy protection or that it has the same dignity of privacy protection. The misunderstanding of the concept of privacy has determined a devaluation of its value and a lower level of protections of some of its relevant sides, like solitude, anonymity, intimacy and personality [4].”

Legaresi is correct in his analysis of data protection versus visibility protection. Social networking websites are an example of where data could be digitally protected yet not private. Many users list their phone numbers and addresses on these websites, which, unless privacy options are available and applied on the social networking site, can be accessed by anyone on the network. In the work environment, this fact is especially important. Many employees post pictures on social networking websites that may be seen as inappropriate by their employers. Tiffany Shepherd was fired from her job as a high school biology teacher after pictures of her in a bikini were found [5] on her social networking site.

I don't think Tiffany should have been fired, as her pictures were not crude or in bad taste; however, I do respect the right of the school to fire a teacher they believe is poorly representing the school. A New England Patriots cheerleader was fired after she posted a photo of herself at a party next to a passed-out man covered in offensive markings [6]. In this case, I think the Patriots had every right to fire her: not only was she poorly representing the football organization, but they are a private company and should be able to fire anyone for any reason other than race, gender, religion, disability, or sexual orientation. There are arguments against firing employees without direct cause. Many believe that what they do outside the workplace is their own business, and company rules are not always transparent to employees. However, private companies need this right to determine who can work for them. For example, if a male employee had an affair with his boss's wife, would the boss be unable to fire him because the affair happened outside of work? Of course not! The boss, like all company bosses, should have the right to fire people for events happening outside of work. So, referring back to the Tiffany Shepherd incident, she, along with anyone else, can control what her employers see by simply not posting controversial media on her profile pages.

Similar moral questions arise in public schools. Schools typically have web filters to prevent users from accessing certain websites. In many schools, every page a student visits, whether it is ESPN, eBay, or Facebook, is immediately logged and reported to school administrators. While this oversight seems comparable to what companies do, I don’t think public schools share the same ethical standards. The difference is that employees today have an expectation of using some of their computer time for personal reasons, since they often have a company email account and/or are on the computer all day. High-school students, who use computers sparingly during class for research purposes, should not be using that time to send personal emails or to visit eBay.

In current practice, social networking privacy is almost an oxymoron. On the one hand, social networking websites offer services to connect users by sharing information. On the other hand, users prefer to restrict the sharing of information to certain parties. One solution some social networking sites such as Facebook have implemented is privacy controls. Users (employees, students) can select which data is viewable to other users (i.e., employers, teachers). But where does the line between personal responsibility and privacy fall? Concessions need to be made on both sides. I need to realize that what I post on a social networking site is no longer private, and social networking sites should, but should not be obligated to, offer privacy controls. The reason social networking sites should not be obligated to provide privacy controls is that regulation is nearly impossible. Many argue the opposite: that social networking sites should be obligated to have visible, explicit, and easy-to-use privacy controls. However, the only way regulatory agencies would be able to know whether users’ information is being shared with unwanted parties is by either approving website code or monitoring user accounts. Either is made increasingly difficult as new versions of social networking sites are constantly released.

I think this problem is solving itself. Social networking sites compete for users; ones that offer more services, such as privacy controls, are more attractive to customers. While this capitalistic perspective may seem speculative, online traffic rankings back up this claim, listing MySpace and Facebook, two social networks that offer privacy controls, as the most popular social networking sites in the United States.

Sharing personal data with third parties is a logistical privacy problem for these social networking websites. In order to show relevant advertisements to a specific user, websites analyze that user’s information and serve ads corresponding to it. For example, if a user’s marital status is listed as “single” on Facebook, that user may see a web advertisement for a dating website. Or, if one of the user’s favorite bands is Coldplay, they might see a banner ad for a Coldplay concert. As long as these websites do not share identifiable information with the companies serving the ads and also notify users that their data is being shared with other companies, the practice is ethical. A counterargument is that these sites should ask permission from the user. Some applications do request permission from the user to send information anonymously to a statistics service. However, requesting permission could hinder the experience of using the product. I personally think that as long as a service is sending my information anonymously, the service is ethically OK. Whether regulation or enforcement of anonymity is possible is a different question.

Another ethical dilemma is whether companies can sell user data to marketing companies. For instance, TV networks would love to know trends in what users list as their favorite TV shows. Facebook and MySpace can and do provide empirical data to companies. While many object to this practice because their information is technically being distributed to a third party without their permission, I don’t find it morally wrong as long as the data set being sent to companies is sufficiently large to preserve individual anonymity.

The Internet was built to help share information rather than hide it. Since websites require information in order to deliver information, they are ethically bound to inform their users, in an explicit and non-confusing way, exactly how their information is being kept. There is no single way to enforce this moral standard on websites. Protecting privacy online is a multi-faceted problem that involves both regulation and laissez-faire policies. Nevertheless, the best weapon against privacy threats is the realization of one’s own online privacy vulnerability.

1. N. Anderson, “Study: Reading online privacy policies could cost $365 billion a year,” 2008.

2. “Identity Theft Statistics.”

3. “Vague online privacy policies are harming e-commerce, new survey reports.”

4. N. Lugaresi, “Principles and Regulations About Online Privacy: ‘Implementation Divide’ and Misunderstandings in the European Union,” 2002.

5. “Tiffany Shepherd fired for wearing Bikini?,” 2008.

6. “Patriots Cheerleader Fired over Facebook Swastika Photo,” 2008.

Why doesn’t the System.Capabilities class have a browser property?

Can someone explain why Adobe put in the feature:

trace(Capabilities.os); // gets the operating system

and not something like

trace(Capabilities.browser); // gets the browser... this property doesn't exist unfortunately...

On a game project I’m working on, the game uses key combinations such as CTRL+Z and CTRL+Y for certain features. Unfortunately, key combos using the CTRL key don’t work in Internet Explorer (though they do work in other browsers). What I want to do is detect whether the user has IE, then change the key combinations to SHIFT+Z and SHIFT+Y. The only way to determine the browser is to use a server-side language or Javascript to pass the browser name into Flash… grr….

Does anyone know if there’s a way to use ExternalInterface to get this information? I don’t want to depend on Javascript being on the page or on specific Flash vars being sent in… but I may have to.
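For what it’s worth, here’s a sketch of the ExternalInterface approach — it only works when the SWF is embedded in a page with JavaScript enabled and allowScriptAccess permitting the call, which is exactly the dependency I’d rather avoid:

```actionscript
import flash.external.ExternalInterface;

// Returns true if the hosting browser looks like Internet Explorer.
// Depends on JavaScript being enabled and reachable from the SWF;
// in the standalone player (or with script access blocked) this
// simply reports false.
function isInternetExplorer():Boolean {
    if (!ExternalInterface.available) return false;
    var ua:String = ExternalInterface.call(
        "function(){ return navigator.userAgent; }") as String;
    return ua != null && ua.indexOf("MSIE") != -1;
}
```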

Update 10/17/08:

I found a way to detect for IE.

Capabilities.playerType; //Returns ActiveX if in IE. 

A string that indicates the type of player. This property can have one of the following values:

  • "StandAlone" for the Flash StandAlone Player
  • "External" for the Flash Player version used by the external player, or test movie mode..
  • "PlugIn" for the Flash Player browser plug-in
  • "ActiveX" for the Flash Player ActiveX Control used by Microsoft Internet Explorer

AS3 EventManager 1.23: cleanUp method added

A major reason some users of my EventManager class may see more memory leaks reported than actually exist is that EventManager stores all the listeners in a dictionary. If we remove a display object, then technically (if weakReference is set to true) the listeners for the DisplayObject and its children are removed. Until today, EventManager required coders to manually remove every listener they created with EventManager, even if Adobe’s GC had already removed them.

As much as I like listeners being auto-removed, I feel it’s better practice to manually remove them so you know what’s going on while you code. If you always depend on automatic removal, you might find yourself with memory leaks.

I never thought to actually have EventManager check each object to see whether it will still trigger the events EventManager says it has. Therefore, I added cleanUp, which goes through all of the listeners and removes the ones that no longer trigger. This new method means that EventManager should be more accurate about what’s going on (especially if you call a deepTrace now). It won’t work all the time, of course (multiple listeners in one object will still be reported as a leak unless garbage collection takes care of them or you use EventDispatcher’s removeEventListener to remove them).
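Roughly how I’d expect it to fit into a session — note this is a hypothetical usage sketch, and the addEventListener call here just stands in for however you register listeners through EventManager:

```actionscript
// Register a listener through EventManager (hypothetical registration call).
EventManager.addEventListener(panel, MouseEvent.CLICK, onPanelClick);

removeChild(panel);       // panel is gone; weak-referenced listeners may be GC'd
panel = null;

EventManager.cleanUp();   // prune listeners that no longer trigger
EventManager.deepTrace(); // the leak report should now reflect reality
```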

Anyway, check it out.