Category Archives: Technology

How to Convince Someone to Be Your Technical Cofounder

Over the past few years, I have been solicited by many friends, friends of friends, and complete strangers to be a technical cofounder/developer on a new project.

I have declined almost all of the projects and startups that have been presented to me.

Here are the main reasons why:

Almost always, the offer from the startup goes something like, “You create an app for us for free, and we will give you X% of our company/profits in return.”

I think the founders of these startups underestimate both the amount of work it takes to build the “app” and how to value the X% equity.

The X could be 100 and developers would still decline to work for free. Startups need to rethink how they propose working relationships by clearly explaining their business and why it is worth a developer’s risk.

Here are some questions that every team needs to answer and communicate to potential technical cofounders.

1) What does the current status of my startup bring to the table?

Could this developer duplicate this startup themselves? Do the idea and execution of the business rely on the specific expertise of the existing team members? In other words, what makes your current team so special and unique? I can’t stress enough how often the answers to these questions are not offered.

2) What is the problem my business is trying to solve?

This question is almost always overlooked or not communicated effectively. How do you know if this is a real problem? How much does this problem cost an average customer? Who/what says there is a problem, and why does their opinion have any credibility?

Who says that your business has a solution, and does the person making the claim hold any credibility?

3) How close are you to closing business with your first customer?

If the answer to this question is more than 3 months, it will be incredibly difficult to convince a technical person to build something for free. You have to remember that decent developers have a very high opportunity cost. They could be working on any number of paying projects/jobs. Every hour they spend working on your project costs them their hourly rate. It doesn’t mean that your idea isn’t good; it just might not be good enough to be worth the risk.

4) Why can’t I pay this developer?

You need to have a legitimate reason why you can’t pay a developer any amount for their time. Remember, it is very likely that your time is NOT equal to a developer’s. Just because you are working for free doesn’t mean that your opportunity cost is equal to someone else’s. Could you raise money and pay the developer a discounted rate plus equity? Developers will often pay more attention to combined offers of money and equity, since that lowers their risk while still offering upside.

Startups should approach potential technical cofounders the same way they do investors. While investors give you dollars, developers give you dollars in the currency of time.

Wired’s Article on the “Web is Dead”

I had to concur with Wired’s article about the web dying, posted a couple of days ago. After reading “both sides” as well as some of the comments, I wanted to offer my brief thoughts regarding the web, open standards, and the paradigm shift to mobile. While I think HTML is and has always been a markup language valuable for understanding how the web browser displays information, I have to agree that the technological shift to mobile has completely changed how we obtain information.

Consider application use today. We log on with our devices to Pandora, Netflix, Amazon, Facebook, Twitter, or Hulu – a very small set of companies with a monopoly on their respective functions. Yes, you can use your web browser to access these services, and devices even have their own web browsers built in. But there are so many “apps” these days that one can foresee that learning how to make mobile apps in Java, Flash Lite, or Objective C will be more valuable than knowing how to make web apps in HTML, Flash, and AJAX/Javascript.

Thankfully, good practice in general application development has always been to have the front end be a simple output of the back end. The SQL-to-PHP/JSP/Ruby-to-frontend pipeline will continue; only the frontend will change. The web, now 18 years old, has changed slowly, so in a way I feel web developers have seen this shift coming.
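As a rough sketch of what I mean (in TypeScript, with a made-up endpoint and data shape, not any real API): the backend speaks plain HTTP and JSON, and any frontend – browser page, phone app, or game client – consumes the exact same thing and only changes how it renders it.

// Hypothetical example: one backend, many frontends. The URL and the Score
// shape are invented for illustration.
interface Score {
  player: string;
  points: number;
}

async function fetchLeaderboard(): Promise<Score[]> {
  // The same HTTP + JSON request works from a web page, a native mobile app,
  // or a console game client; only the rendering layer differs.
  const response = await fetch("https://api.example.com/leaderboard");
  if (!response.ok) {
    throw new Error(`Backend returned ${response.status}`);
  }
  return (await response.json()) as Score[];
}

// A browser would render this as HTML; a phone app would use a native list view.
fetchLeaderboard().then((scores) =>
  scores.forEach((s) => console.log(`${s.player}: ${s.points}`))
);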

As an online game developer, I see how games now connect to dozens of APIs (such as Facebook’s) to become an “app” on the service. These services connect players to social networks, which can now be accessed from someone’s phone. I think online “web” games will experience a shift into “apps” integrated into social networks and streams – you will see games run on different mobile, console, and tablet devices using the same backend. While this technique has been used by some gaming companies, it has only been done recently. Watch and see – very soon you’ll see multiplayer games where iPhone players can play chess/checkers/poker against browser players. Eventually, as mobile devices increase in memory and performance, I bet you’ll see more bandwidth-intensive games (such as shooters or real-time strategy games) go multiplayer across platforms.

I’m sure there’s much that can be said about how UI/product designers will be affected. They will increasingly have to be aware of all of the affordances and constraints of various devices. Yes, making a product for 30 different mobile phones and devices is WAY more difficult than making a web application work for 3 or 4 different browsers and resolutions.

Anyway, I think the lesson and proper reaction is to be thankful for XML, JSON, and general HTTP request modeling, and for front ends like Flash, Java, and Objective C, along with all of the open source libraries that let them talk to the various backend server structures and clouds. And above all we should be thankful for Tim Berners-Lee and his decision to keep the web open so we can adapt to these changes – let’s just hope the web remains neutral :-).

What I don’t like about the Wired article is its “debate” over who is to blame for the end of the web as we know it. Should we blame Google, Apple, and Microsoft, or should we blame ourselves? My concern isn’t with the debate itself; it’s with the very principle of framing it as a debate with sides.

I think a wiser statement than saying the web is dying is to say the web is evolving. The web has simply evolved to this state. I approach the deterioration of web pages as a natural and predictable phase of the Internet’s evolution, based on the capabilities of today’s hardware. I guess a debate does exist over whether giants such as Apple, by releasing products like the iPad, the iPhone, or most relevantly the App Store, accelerated this “virtual selection.” Well, I don’t know how to answer that question, but I think it’s unquestionable that Apple, Google, and Microsoft have helped nurture and shape the Internet into its current state. Who knows? If Google had released the Chrome OS they hinted at earlier this year (which I predicted in 2005 would be an all-browser, HTML- and JavaScript-based operating system) before Apple launched the App Store, then maybe I would be saying HTML 5 is the future. Maybe now it’s too late and it would be better to go with an Internet OS that’s all app based… Or maybe there’s room for both the web AND the Internet.

Ethics and Online Privacy

Online, privacy barely exists. Every page, email, and instant message can be intercepted, manipulated, and/or logged. The millions of Internet users browsing the web at this very moment may be surprised to find that their online information is not secure. The Internet’s greatest strength, accessibility, is also its greatest weakness when it comes to security. Web servers have the ability to store information about every visitor. In order to provide services to their customers, websites must store information on their customers’ machines as well as in their own databases. Collecting data is an essential function of web applications; unfortunately, the majority of data collection practices used today are unethical because users are not properly informed of what information is being gathered, how, and why.

Digital data is virtual; the information does not exist anywhere other than as bits and bytes stored electronically. Unlike a letter that exists on a physical sheet of paper, digital media can be transmitted, duplicated, or modified in microseconds. Online data has the same characteristics. One analogy for how the web handles data is a boy and his father playing catch with a baseball. The child is the client, and the father is the web server. The boy throws a baseball to the father, and the father catches it. On the baseball there is writing from the boy (the client) that the father (the server) can read. The father then erases the writing on the baseball, responds with his own, and throws the ball back. Unfortunately, the boy is very young and illiterate and must have his mother (the web browser) interpret what the father wrote. For example, the boy plays catch with CNN.com and throws a baseball that says “Give me CNN.com’s homepage file” (which his mother wrote) to CNN.com. CNN.com reads the message, writes the HTML file on the baseball, and throws it back to the boy. The boy catches the baseball and asks his mother to read it for him. The mother (the web browser) checks the file to make sure it does not contain any malicious “writing” (code) and then reads it to the boy. The mother can also remember data (cookies) for the boy that the father wants the boy to write down on the next baseball.

This analogy may seem odd; however, it is a way of understanding how the Internet works. The baseball represents the data being passed back and forth. Unfortunately, that baseball can be “intercepted” on its way from one party to the other. If the writing on the baseball is not encrypted, then whoever intercepts the baseball has access to that data. Additionally, there is nothing stopping the father (the web server) from passing the boy’s data along to some other person.
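To make the analogy concrete, here is a minimal sketch of the same round trip in TypeScript using Node’s built-in http module; the port, path, and cookie name are made up, and the interception risk only goes away when the exchange happens over HTTPS.

import * as http from "http";

// The "father" (web server): reads what the client wrote, asks the "mother"
// (browser) to remember something via Set-Cookie, and writes a reply back.
const server = http.createServer((req, res) => {
  const rememberedNote = req.headers.cookie; // whatever the browser wrote on this baseball
  res.setHeader("Set-Cookie", "lastVisit=" + Date.now()); // data the browser should store
  res.end(`You asked for ${req.url}. Cookie you sent: ${rememberedNote ?? "none"}`);
});

server.listen(8080, () => {
  // The "boy" (client) throws a request; without encryption, anyone in the
  // middle can read both the request and the response.
  http.get("http://localhost:8080/homepage", (res) => {
    console.log("Server asked us to remember:", res.headers["set-cookie"]);
    res.on("data", (chunk) => console.log(chunk.toString()));
    res.on("end", () => server.close());
  });
});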

Web servers need the pieces of data stored in “cookies” to deliver products; however, the way that web servers use cookies poses significant ethical concerns. Cookies are a form of invisible data gathering; most users have no idea that cookies are being stored on their machines. The use of web browser cookies by websites is ethical and essential to the web. The problem cookies pose is their potential to be misused and abused. Most websites store enough information to identify individuals. That cookie data can be compromised if it is stored on a user’s PC in unencrypted form; any user of that PC can read it. Websites should be designed to let users know what information is being stored and how, and should use encryption to protect data. There is already a standard on the web for this: the Privacy Policy document, a detailed description of what information a website collects. Currently in the U.S.A., only websites that target or knowingly collect information from children under the age of 13 must post a Privacy Policy on their website. This law, known as the Children’s Online Privacy Protection Act (COPPA), requires users under the age of 13 to obtain parental approval before registering with a website. While the act is well-intended, most websites (especially small ones) do not have the resources to verify parental signatures.

There are legitimate counterarguments to enforcing Privacy Policy documents. Simply ensuring that what a Privacy Policy says matches what the website actually collects is nearly impossible given the vast number of websites in operation. A second problem is that actual regulation is impossible, as the government does not have the resources to verify whether a web server stores only the information listed in its privacy policy. Lastly and most importantly, very few users actually read privacy policies on websites. A study done at Carnegie Mellon University [1] finds that privacy policies on U.S. sites average 2,500 words and take about 10 minutes to read (thus costing billions of dollars per year in opportunity cost). The study concludes that because Privacy Policy documents take so long to read, and are difficult to understand, most Internet users ignore them.

While cookies are stored on my machine, data I enter into web forms is stored on web servers. Since Privacy Policy documents are so rarely posted, followed, or read, how can I be assured that the credit card information I entered into a web form to purchase a product won’t be kept by the webmaster? There is no way for me to know how long it is stored or who has access to view it. Credit card and social security numbers are examples of sensitive information that criminals, rather than companies, seek. Identity theft is so rampant [2] in the United States that businesses lose 221 billion dollars every year. Identity theft has hurt the online economy, according to a survey of online shoppers done by Harris Interactive for Privacy & American Business and Deloitte & Touche LLP [3]: 64% of respondents said they had decided not to purchase a product from an online company because they weren’t sure how their information would be used.

Currently, solutions are being offered through well-known third-party vendors such as PayPal; however, many websites choose to store credit card information themselves. Unfortunately, these sites are often unprotected from hackers and criminals seeking to steal the identities of their customers. An ethical solution is to have a government-regulated list of authorized transaction vendors (like PayPal or Google Checkout) that online transactions must use. The use of any private system should be illegal unless it is on the government’s list of approved transaction middlemen.

While cookies are an important part of online privacy, a report [4] concerning privacy in the European Union notes that protecting personal data from intrusion is not the only part of protecting privacy. Lugaresi reports that “Personal data protection has absorbed most of regulatory efforts devoted to privacy, on the wrong assumption either that it coincides with privacy protection or that it has the same dignity of privacy protection. The misunderstanding of the concept of privacy has determined a devaluation of its value and a lower level of protections of some of its relevant sides, like solitude, anonymity, intimacy and personality [4].”

Lugaresi is correct in his analysis of data protection versus visibility protection. Social networking websites are an example of where data could be digitally protected yet not private. Many users list their phone numbers and addresses on these websites, which, unless privacy options are available and applied on the social networking site, could be accessed by anyone on the social network. In the work environment, this fact is especially important. Many employees post pictures on social networking websites that may be seen as inappropriate by their employers. Tiffany Shepherd was fired from her job as a high school biology teacher after pictures of her in a bikini were found [5] on her social networking site.

I don’t think Tiffany should have been fired, as her pictures were not crude or in bad taste; however, I do respect the right of the school to fire a teacher it believes is poorly representing the school. A New England Patriots cheerleader was fired after she posted to Facebook.com a photo of herself at a party next to a passed-out man covered in offensive markings [6]. In this example, I think the Patriots have every right to fire her: not only is she poorly representing the football organization, but they are a private company and should be able to fire anyone for any reason other than race, gender, religion, disability, or sexual orientation. There are arguments against firing employees without direct cause. Many believe that what they do outside of the workplace is their own business. Additionally, company rules are not always transparent to employees. However, private companies need this right to determine who can work in their company. For example, if a male employee had an affair with his boss’s wife, would the boss not be able to fire him because the affair happened outside of work? Of course not! The boss, like all company bosses, should have the right to fire people for events happening outside of work. So, referring back to the Tiffany Shepherd incident, she, like anyone else, can control what her employer sees by simply not posting controversial media on her profile pages.

Similar moral questions arise in public schools. Schools typically have web filters to prevent users from accessing certain websites. In many schools, every page a student visits, whether it is ESPN, eBay, or Facebook, is immediately logged and reported to school administrators. While this oversight seems comparable to what companies do, I don’t think public schools share the same ethical standards. The difference is that employees today have the expectation of using some of their computer time for personal reasons, since they often have a company email account and/or are on the computer all day. High school students, who use computers sparingly during class for research purposes, should not be using that time to send personal emails or to visit eBay.

In current practice, social networking privacy is almost an oxymoron. On the one hand, social networking websites offer services to connect users by sharing information. On the other hand, users prefer to restrict the sharing of information to certain parties. One solution that some social networking sites such as Facebook have implemented is privacy controls. Users (employees, students) can select which data is viewable to other users (e.g., employers, teachers). But where does the line between personal responsibility and privacy fall? Concessions need to be made on both sides. I need to realize that what I post on a social networking site is no longer private, and social networking sites should offer privacy controls but should not be obligated to. The reason social networking sites should not be obligated to provide privacy controls is that regulation is nearly impossible. Many argue the opposite: that social networking sites should be obligated to have visible, explicit, and easy-to-use privacy controls. However, the only way regulatory agencies would be able to know whether users’ information is being kept from unwanted users is by either approving website code or monitoring user accounts. Either is made increasingly difficult as new versions of social networking sites are constantly released.

I think this problem is solving itself. Social networking sites compete for users; ones that offer more services, such as privacy controls, are more attractive to customers. While this capitalistic perspective may seem speculative, the online statistics website Alexa.com backs up the claim by ranking MySpace and Facebook, two social networks that offer privacy controls, as the most popular social networking sites in the United States.

Sharing personal data with third parties is a logistical privacy problem for these social networking websites. In order to show relevant advertisements to a specific user, websites analyze that user’s information and show ads corresponding to it. For example, if a user’s marital status is listed as “single” on Facebook, that user may see a web advertisement for a dating website. Or, if one of the user’s favorite bands is Coldplay, they might see a banner ad for a Coldplay concert. As long as these websites do not share identifiable information with the companies serving the ads, and also notify users that they are sharing their data with other companies, the practice is ethical. A counterargument is that these sites should ask permission from the user. Some applications do request permission from the user to send information anonymously to a statistics service. However, requesting permission could hinder the experience of using the product. I personally think that as long as a service is sending my information anonymously, the service is ethically OK. Whether regulation or enforcement of anonymity is possible is a different question.
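As a toy sketch of the distinction (every type and field name below is invented, not any real social network’s API), the request that goes to the ad-serving company would carry interests and coarse attributes but never the identity behind them:

// Hypothetical shapes, for illustration only.
interface Profile {
  name: string;
  email: string;
  maritalStatus: "single" | "married";
  favoriteBands: string[];
}

interface AdRequest {
  maritalStatus: string;
  interests: string[];
}

function toAnonymousAdRequest(profile: Profile): AdRequest {
  // Identifiable fields (name, email) are deliberately left out of what is
  // shared with the company serving the ads.
  return {
    maritalStatus: profile.maritalStatus,
    interests: profile.favoriteBands,
  };
}

console.log(
  toAnonymousAdRequest({
    name: "Jane Doe",
    email: "jane@example.com",
    maritalStatus: "single",
    favoriteBands: ["Coldplay"],
  })
); // { maritalStatus: 'single', interests: [ 'Coldplay' ] }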

Another ethical dilemma is whether or not companies can sell users’ data to marketing companies. For instance, TV networks would love to know trends in what users list as their favorite TV shows. Facebook and MySpace can and do provide empirical data to companies. While many object to this practice because their information is technically being distributed to a third party without their permission, I don’t find it morally wrong as long as the data set being sent to companies is large enough to preserve individual anonymity.

The Internet was built to help share information rather than hide it. Since websites require information in order to deliver information, they are ethically bound to inform their users in an explicit, non-confusing way exactly how their information is being kept. There is no single solution for forcing websites to uphold this moral standard. Protecting privacy online is a multi-faceted problem that involves both regulation and laissez-faire policies. Nevertheless, the best weapon against privacy threats is the realization of online privacy vulnerability.
Bibliography

1. N. Anderson, “Study: Reading online privacy policies could cost $365 billion a year,” 2008; http://arstechnica.com/news.ars/post/20081008-study-reading-online-privacy-policies-could-cost-365-billion-a-year.html.

2. “Identity Theft Statistics,” http://www.spamlaws.com/id-theft-statistics.html.

3. “Vague online privacy policies are harming e-commerce, new survey reports,” http://www.internetretailer.com/internet/marketing-conference/578566856-vague-online-privacy-policies-are-harming-e-commerce-new-survey-reports.html.

4. N. Lugaresi, “Principles and Regulations About Online Privacy: ‘Implementation Divide’ and Misunderstandings in the European Union,” 2002.

5. “Tiffany Shepherd fired for wearing Bikini?,” 2008; http://www.newspostonline.com/world-news/tiffany-shepherd-fired-for-wearing-bikini-2008103111672.

6. “Patriots Cheerleader Fired over Facebook Swastika Photo,” 2008; http://www.foxnews.com/story/0,2933,448044,00.html.

Google Chrome Doesn’t Always Separate Each Tab into a Different Process

I’m using Google Chrome right now, and I have to say that it’s blazing fast. Based on Javascript benchmarks I’ve run from around the web, I found it to be way faster than IE7, way faster than Firefox 2, much faster than Firefox 3, and even faster than Opera. Chrome’s new Javascript engine is awesome, but I just wanted to clarify that tabs do not always run as separate processes.

Google mentions in one of their Chrome FAQs that it’s up to the web developer to decide whether a new link opens as a separate process or not.

I found this out first hand by going to W3Schools’ HTML samples page and clicking on one of the links. Instead of opening in the same tab, the link opens in a new tab. So now I had http://www.w3schools.com/HTML/tryit.asp?filename=tryhtml_basic open in a tab. I edited the HTML source code in the left frame, put in a simple infinite Javascript loop, and hit run. As expected, the tab, instead of the entire browser, hung.
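The loop itself was nothing fancier than something along these lines (pasted inside a script tag in the tryit editor; any code that never yields control back to the page will do):

// Deliberately never returns control to the browser, so the tab's process hangs.
while (true) {
  // spin forever
}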

However, when I clicked on the W3Schools HTML samples tab (which had launched the now-frozen tab), I noticed it was also frozen. My other tabs, such as Gmail and Google Calendar, were fine.

Google answered why: 

New tabs spawned from a web page, however, are usually opened in the same process, so that the original page can access the new tab using JavaScript.

Makes sense. In fact, that is one of the reasons why current web browsers run everything in a single process. No big deal. Most of the time I have multiple “sessions” of browsing, and I don’t mind if a few tabs share the same process.
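Concretely, the same-process requirement exists because the opener keeps a scriptable handle to the tab it spawned. A small browser-side sketch (the file name is a placeholder):

// The page that opens a tab can reach into it through the returned window
// handle (same-origin), so both tabs have to live in the same process.
const child = window.open("child.html");
if (child) {
  child.addEventListener("load", () => {
    console.log("New tab title:", child.document.title);
  });
}
// ...and the new tab can reach back through window.opener.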

Anyway, if you don’t have Google Chrome, get it. Finally, multi-core machines will see benefits when browsing the web. I can have 10 tabs open without seeing a slowdown.

My Thoughts on Google and Yahoo Indexing Flash Content

Today Google reported that they’ve developed an algorithm to index Flash content (only text, not video or images). This news is clearly dominating Flash news sources around the web, with mostly mixed reviews.

Many bloggers are criticizing Google, claiming that it’s impossible for any algorithm to figure out the text content of SWF files that load text from external sources (such as XML), because it’s impossible to know the format of the XML documents being transferred. But who cares? I still don’t get why people assume that’s how the algorithm works. Google has stated that it’s able to crawl externally loaded SWFs (although they don’t couple them with the original SWF when indexing, which is a significant problem for sites that load multiple SWFs for navigation); consequently, they must be monitoring HTTP requests made by the SWF and can do the same with XML files. Google doesn’t need to know how the XML file is parsed… the Flash document will do that for them. They can just have the Flash load the XML file, monitor the text fields, and see the values. That’s probably why they say, “To protect your material from being crawled, convert your text to images when possible”.
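To illustrate the idea (and only the idea – I obviously don’t know what Google’s crawler actually looks like), the indexing step could treat the SWF as a black box: run it, let it fetch and parse its own XML, then read whatever text ends up in its text fields. Every name in this sketch is invented.

// Invented interfaces, purely to sketch the "run it and read the text fields"
// approach; this is not Google's implementation.
interface TextFieldSnapshot {
  name: string;
  text: string;
}

interface SwfSandbox {
  loadAndRun(url: string): Promise<void>; // execute the SWF, letting it make its own HTTP requests
  listTextFields(): TextFieldSnapshot[];  // walk the display list after it has run
}

async function indexSwf(sandbox: SwfSandbox, url: string): Promise<string[]> {
  await sandbox.loadAndRun(url);
  // The SWF has already parsed its XML however it likes; the indexer only
  // cares about the text that is actually on screen afterwards.
  return sandbox.listTextFields().map((field) => field.text);
}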

The only problem I see is with very dynamic text fields. Maybe the algorithm only goes through static text fields? I see no way a text field that displays random letters (for visual effect) could be meaningfully indexed by any algorithm.

Here’s my prediction: community tagging. Just like the Google Image Labeler game, Google will ask users to tag/label Flash documents that their parser can’t index correctly. Humans would be the perfect computational tool to solve this kind of problem. Yes, there are millions of SWFs out there needing to be indexed, but we really just want the major ones parsed.

Firefox 3 Release: World Record? What world record?

The Mozilla team has announced that they are attempting to set a Guinness World Record for the most downloads of a piece of software in 24 hours… a record that has never been set before.

The first thought that came to me was… huh? How has this record not been set already – by Adobe (or, back in the day, Macromedia) Flash? Isn’t the Flash plugin downloaded millions of times per day? Flash is on more computers than any other piece of software (including any operating system). Macromedia never released download statistics, and Adobe doesn’t really release any other than averages. I’m sure Flash 8 or 9, when they first went public, shattered records. Adobe should release the numbers so Mozilla has something to beat.

Just think back to the day YouTube decided to switch to Flash 9 and how many users downloaded the player then.

Mozilla may set the record, but by no means will they have broken one.

*Edit:

I love FF3 (although I am shocked that they won’t fix a critical Flash bug present in RC1 for the final release). The search bar is great, and I don’t experience the slowdowns I did with FF2. But still, there seems to be a disconnect between FF and the web… Just look at this huge Bugzilla thread about the Flash bug.

Astro (Flash Player 10) Beta Released!

Here are some highlighted features of the new player (a list by Adobe can be found on their Labs page):

  • Dynamic sound generation – Adobe has finally made noise (read the introduction material, parts 1, 2, and 3, by Adobe engineer Tinic Uro). Keith Peters (one of the lucky few to have a version of Adobe’s upcoming authoring tool) has posted a sample application showing dynamic sound.
  • Native 3D effects! It will be interesting to see how Away3D and Papervision react. While the native 3D addition will be great for vector graphics, Astro doesn’t support texture mapping or importing 3D models from third-party software, while current open source projects like Away3D and Papervision do.
  • Multi-column layouts/tables for textfields.
  • Ability to change bitrates for streaming video on the fly.

Here are some demos by Adobe.

My reaction: after a survey showed that an astronomical 98% of online videos use the Flash Player, Adobe seems to be catering to those needs. There are HUGE additions to video in Astro, and while I don’t see video portals such as YouTube or Google Video using the native 3D engine (unless it’s for some visualization), I do see web designers smiling because they can now deliver unique features for their clients. It’s only a matter of time before some design firm releases a 3D navigation built on video for a client.

There are also some… somewhat random… additions to the Flash Player. For example, inverse kinematics support with a new “Bones” tool. Is this feature really necessary? What was wrong with just using one of the many 2D physics engines?

Unfortunately, Adobe hasn’t released LiveDocs yet… But that will come soon. Understandable since much of the syntax is subject to change.

I wonder how long now before AS4 and the new ECMA features…

SmarterChild and ELIZA

This year, only three entries were submitted to the annual Loebner Prize Competition. The prestigious $100,000 prize has yet to be won, and the competition is the only place where it has a chance to be awarded. If a team develops a chatterbot that can fool judges into thinking they are communicating with a human, that team will win the prize and the prestige. So why were only three entries submitted to this year’s 17-year-old event? Have developers given up? The answer is yes and no. While developers are still working to expand the intelligence of chatterbots, the focus is no longer on fooling humans. With the rapid advances in information technology, chatterbot developers are adjusting their focus to keep up. The 1966 chatterbot ELIZA spawned this specific field of human-computer interaction; however, today’s chatterbots no longer have the same goals as ELIZA. For example, one of the latest chatterbots, SmarterChild, has branched away from ELIZA and from the field of artificial intelligence toward information technology. While ELIZA and SmarterChild are similar in that they mimic human behavior and response, they differ greatly in purpose and behavior.

ELIZA attempts to act as a Rogerian psychiatrist by continuously asking questions of its users. A user types text as input into the ELIZA program, and ELIZA returns a response that, ideally, is coherent and sensible to the user. The first testers of ELIZA sent messages to it via teletype without knowing that they were chatting with a robot instead of an actual psychiatrist. Shockingly enough, ELIZA was able to fool a great number of its users into thinking they were communicating with an actual human psychiatrist. In fact, ELIZA was able to elicit deep, sensitive responses from its users. This artificial intelligence field has since led to new, more sophisticated chatterbots with the same goal of fooling humans and gathering sincere human responses.

Alan Turing had some assumptions about mankind. He expected that by now, chatterbots would be able to fool most humans into thinking they were chatting with another human after a few minutes of conversation. Unfortunately, in terms of the Turing Test, we are not even close. Even the latest award winners of the Loebner competition yield bizarre responses after one or two sentences. Does this mean chatterbots are lagging behind Turing’s vision?

Maybe bots have just taken a step sideways toward a different vision altogether. SmarterChild, a chatterbot developed within the last few years, is arguably one of the most sophisticated chatterbots available. Like ELIZA, SmarterChild can mimic human response. However, SmarterChild’s responses are far more developed than ELIZA’s. For example, a typical conversation opening with ELIZA may produce the following response.

You: How are you?
Eliza: Does that question interest you?

Compared to the more refined memory of SmarterChild:

You: How are you
SmarterChild: I’m doing great, how are you?
You: Ok
SmarterChild: OK? Glad to hear it!

Nevertheless, the reason SmarterChild is one of the most advanced chatterbots is not because it can impersonate human dialog extremely well. SmarterChild is advanced because it has a huge database of information resources, it remembers individual human responses and bases future responses on them, and it changes personality based on “peer pressure” from its audience. SmarterChild is able to return movie schedules, weather forecasts, news, opinion polls, random statistics, trivia, and tons of other types of information. The first pieces of information SmarterChild asks strangers for are name, age range, and zip code. SmarterChild uses this data for local news, weather, and more. Additionally, SmarterChild changes its own views and questions based on the opinions entered by its users. For example, if users tend to like pizza, SmarterChild will change its favorite food.

In terms of credibility, SmarterChild does not offer an experience like ELIZA. Credibility implies believable emulation of a particular persona. The function of SmarterChild is to be a friendly encyclopedia rather than a credible, stereotypical personality. On the other hand, ELIZA is indeed a character. ELIZA is programmed to be someone and to respond in a way that fulfills a specific character or stereotype, producing a believable experience. No matter what the user inputs, ELIZA will pose a question in the manner of a Rogerian psychiatrist.

ELIZA creator Joseph Weizenbaum argues that no robot can ever replace humans because humans have the ability to comprehend and exhibit emotion. Yet the implications of ELIZA have drifted away from emotion and feeling comprehension. Yes, users of Microsoft Office may feel hatred toward the Paper Clip “help” agent; however, the Paper Clip is unable to respond to those emotions and can only react based on internal states and decisions made by the user. For example, the Paper Clip can be dragged around and silenced, but it will not sense that the user is upset when it repeatedly asks whether the user needs help writing a letter. ELIZA was built to challenge this idea of comprehending emotion, but it does so in a very superficial way.

You: I am sad.
Eliza: Do you think coming here will help you not to be sad?
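Under the hood, responses like that come from little more than keyword matching and reflecting the user’s own words back. A minimal sketch of the trick in TypeScript (not Weizenbaum’s actual script):

// A tiny ELIZA-style responder: match a keyword pattern, echo the user's words
// back inside a canned question.
const rules: Array<[RegExp, (m: RegExpMatchArray) => string]> = [
  [/i am (.*)/i, (m) => `Do you think coming here will help you not to be ${m[1]}?`],
  [/i feel (.*)/i, (m) => `Why do you feel ${m[1]}?`],
  [/my (.*)/i, (m) => `Tell me more about your ${m[1]}.`],
];

function elizaReply(input: string): string {
  const cleaned = input.trim().replace(/[.!?]+$/, ""); // strip trailing punctuation
  for (const [pattern, respond] of rules) {
    const match = cleaned.match(pattern);
    if (match) return respond(match);
  }
  return "Does that question interest you?"; // generic, in-character fallback
}

console.log(elizaReply("I am sad.")); // Do you think coming here will help you not to be sad?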

Unlike ELIZA, SmarterChild is more of a tool than a character. Like a search engine, email client, or word processor, SmarterChild helps the user accomplish tasks. Yet SmarterChild takes a different approach to information technology than other Web 2.0 projects that attempt the same tasks. In contrast to websites, there are no “menus” to click through; the environment is text only. SmarterChild is a chatterbot, and because of this it can deliver information to its users faster than any website. The user types the desired request into the chat window as a sentence, and SmarterChild relies on its linguistic artificial intelligence to understand the context and what kind of response to deliver. Additionally, because SmarterChild lives on an instant messaging medium, it can write hyperlinks that let users quickly access websites.

You: What movies are playing?
SmarterChild: Movies playing in or near Marietta, GA (30006) on Tuesday, October 23rd:

1 The Game Plan [PG]
2 30 Days of Night [R]
3 The Comebacks [PG13]
4 The Heartbreak Kid [R]
5 Michael Clayton [R]
6 We Own the Night [R]

Type more for the next set of results.

ELIZA has nothing to do with information technology; it was built to advance artificial intelligence and Alan Turing’s prophecy. But today, the goal is not to build a robot to fool a human. One key difference between SmarterChild and ELIZA is that SmarterChild is aware that it is not human.

You: Are you a robot?
Eliza: Would you prefer if I were not a robot?

Versus SmarterChild:

You: Are you a chatterbot?
SmarterChild: Yes indeed.
ScratchFromStart: Are you human?
SmarterChild: No I am not.

In this day and age of knowledge, the goal is to make a robot that delivers information in a relevant, contextual manner.

The revolutionary effect of ELIZA was not the subsequent emergence of human-computer interaction and innovations in chatterbots. The real influence came in information technology and in discovering new ways to deliver information and knowledge to users in a friendly, humanistic style.