
Digital Marketing: What I read & pay attention to

(Last updated: 11-May-2011)

I recently completed a job application that asked what websites I read & pay attention to. Here are some of the websites & web pages I provided:

John Battelle’s Searchblog – The man who wrote the book on Search. His blog on the online industry is consistently sharp & thought-provoking. Battelle writes really well. I don’t always agree with what he says, but that is a healthy thing.

Wired – Do you ever wish there was a daily newspaper just about tech? This is probably the closest thing to it.

Search Engine Land – It is kind of an unfortunate name, but they have a number of good search marketing writers & Danny Sullivan knows search better than anyone I know of.

Google blogs – There are a lot of these so I’m just going to list them:

Conversion Rate Experts – The leaders in conversion rate optimisation. Particularly worth paying attention to for their case studies, which give useful insight into their conversion optimisation process.

SEOmoz Daily SEO Blog – Much of what SEOmoz says publicly is couched in caveats, e.g. “this may mean”, “this might suggest”, but that is partly driven by their highly visible position in the SEO industry & the trouble with stating absolutes. Their blog is essential reading for SEO news & tactics.

NYTimes – US-centric news but it is better than, say, Fox. Their Magazine section occasionally does great long-form pieces. The Critics’ Best Of videos are good too.


UPDATED:
Occam’s Razor by Avinash Kaushik – Avinash is a Google Analytics Evangelist who also consults, speaks & writes books on web analytics. He is a guru on the subject & his blog posts over the years have been critical to educating me about Google Analytics & which metrics to ignore & which to pay attention to.


Kiwi redux: Google Trends for Websites

In June 2008, around the time Google made Trends for Websites available, I wrote a post about some sites & their trends – mainly just trends I found interesting. It’s now June 2009 so I thought it would be fun to revisit the same graphs.

This time I’ve limited the graphs to ‘last 12 months’ as opposed to all-time.

ALL GLOBAL TRAFFIC: The rise & rise of Facebook
I’ve put Twitter in there just for fun. I wonder, though, whether Google misses all the Twitter ‘views’ that occur in feed readers, on iPhones & other smartphones, etc.
[Graph: redux-myspace]

NZ TRAFFIC ONLY: Bebo hangs in there
I am actually surprised Bebo isn’t showing more of a decline. I suppose it’s that tween/teen demographic I kind of don’t really care about (sorry!).
[Graph: redux-myspace-newzealand]

It’s tight in E-Commerce
I get to answer my question! We might assume Ferrit’s sale didn’t help, because Ferrit is now gone. I think their press release at the time was a combination of “this idea was ahead of its time” & “we’ve moved online business in New Zealand forward with Ferrit”. It just sounded hollow. What is that analogous to? The Titanic? The Hindenburg? Thanks for coming, Ferrit.
[Graph: redux-ferrit]

What is that thing they say about slow-moving giants?
I actually find this the most interesting graph. Compare it with a year ago. For TradeMe, let’s call that roughly a 15% decrease in traffic from the 2008 average. Still, I don’t think anyone would predict their traffic dropping off over the next 12 months.
[Graph: redux-trademe]

The irrelevance of Slashdot
As noted by John Gruber. People are obviously still visiting Slashdot but it is a shadow of its former self. I think it needs editors choosing & writing up its news rather than republishing user submissions.
[Graph: redux-slashdot]

Kiwi remix: Google Trends for Websites

Google Trends for Websites is Google’s estimate of how much traffic a site is getting. So while not 100% accurate, it provides some interesting insights.

The rise of Facebook
[Graph: myspace]

But for NZ traffic only, it’s all about Bebo
[Graph: myspace_nz]

Ferrit’s sale is seemingly helping their traffic (but is it helping their bottom line?)
[Graph: ferrit]

TradeMe is bigger than the internet
This is TradeMe compared with some significant international sites. And it dwarfs them.
[Graph: trademe]

The fall of Slashdot
[Graph: slashdot]

Goooooogle

Anil Dash:

Google’s announcement of Knol shows that they understand some of their key business drivers very well: with as much as 5% of the search result links for popular terms going to Wikipedia pages, a solution to capturing some of that traffic in an environment that Google can control and display ads on makes good business sense. The idea of sharing the earnings from that content with authors is also good business sense. But as with Google Pages (Page Creator), Blogger, Google Notebook, JotSpot, Google Docs/Writely and other tools, Google has not proven that it understands content creation and publishing as well as it understands its core businesses of search and advertising, or even its ancillary tools for communication and collaboration.

Worse, Knol shares with Google Book Search the problem of being both indexed by Google and hosted by Google. This presents inherent conflicts in the ranking of content, as well as disincentives for content creators to control the environment in which their content is published. This necessarily disadvantages competing search engines, but more importantly eliminates the ability for content creators to innovate in the area of content presentation or enhancement. Anything that is written in Knol cannot be presented any better than the best thing in Knol.

Danah Boyd:

…given that page rank algorithms are proprietary, I can’t wait to see what happens when Knol articles are “magically” higher in rank than the About and Wikipedia equivalents.

The anatomy of Google

Only nine years late, via Speaking Freely, I am reading the paper ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine’ (a.k.a. Google) by Sergey Brin and Larry Page.

I liked this bit about the Google crawler interrupting an online game:

It turns out that running a crawler which connects to more than half a million servers, and generates tens of millions of log entries generates a fair amount of email and phone calls. Because of the vast number of people coming on line, there are always those who do not know what a crawler is, because this is the first one they have seen. Almost daily, we receive an email something like, “Wow, you looked at a lot of pages from my web site. How did you like it?” There are also some people who do not know about the robots exclusion protocol, and think their page should be protected from indexing by a statement like, “This page is copyrighted and should not be indexed”, which needless to say is difficult for web crawlers to understand. Also, because of the huge amount of data involved, unexpected things will happen. For example, our system tried to crawl an online game. This resulted in lots of garbage messages in the middle of their game! It turns out this was an easy problem to fix. But this problem had not come up until we had downloaded tens of millions of pages. Because of the immense variation in web pages and servers, it is virtually impossible to test a crawler without running it on large part of the Internet. Invariably, there are hundreds of obscure problems which may only occur on one page out of the whole web and cause the crawler to crash, or worse, cause unpredictable or incorrect behavior. Systems which access large parts of the Internet need to be designed to be very robust and carefully tested. Since large complex systems such as crawlers will invariably cause problems, there needs to be significant resources devoted to reading the email and solving these problems as they come up.

Source: ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine’, Brin/Page, p. 10
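
The robots exclusion protocol they mention is still how sites opt out of crawling: a robots.txt file that well-behaved crawlers fetch & check before requesting pages. Here is a minimal sketch of that check using Python’s standard library – my own illustration, not code from the paper, & the URL & user-agent string are placeholders:

```python
# Check a site's robots.txt before crawling a URL.
# A well-behaved crawler does this for every host it visits.
from urllib.robotparser import RobotFileParser
from urllib.parse import urlparse

def can_fetch(url, user_agent="ExampleCrawler"):
    """Return True if robots.txt permits user_agent to fetch url."""
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch & parse the site's robots.txt
    return parser.can_fetch(user_agent, url)

if __name__ == "__main__":
    # Placeholder URL for illustration only.
    print(can_fetch("https://example.com/some/page"))
```

A “This page is copyrighted and should not be indexed” notice, of course, is invisible to this check – which is exactly the paper’s point.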

It is also interesting to note the beginnings of Google Book Search in the acknowledgements:

The research described here was conducted as part of the Stanford Integrated Digital Library Project, supported by the National Science Foundation under Cooperative Agreement IRI-9411306. Funding for this cooperative agreement is also provided by DARPA and NASA, and by Interval Research, and the industrial partners of the Stanford Digital Libraries Project.

Source: ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine’, Brin/Page, p. 16

Note also their thoughts on the relationship of search engines and advertising:

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine one of the top results for cellular phone is “The Effect of Cellular Phone Use Upon Driver Attention”, a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web [Page, 98]. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media [Bagdikian 83], we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Source: ‘The Anatomy of a Large-Scale Hypertextual Web Search Engine’, Brin/Page, p. 18
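
Since PageRank is the heart of the paper, here is a toy illustration of the idea for anyone who hasn’t seen it: a page’s rank is the damped sum of the ranks of the pages linking to it, computed by simple iteration. This is my own simplified sketch – the damping factor, iteration count & example sites are made up, & the real system also folds in anchor text & word proximity:

```python
# Toy PageRank by power iteration: a page is important if
# important pages link to it. Simplified illustration only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a base share (the "random surfer" jump) ...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        # ... plus a damped share of the rank of each page linking to it.
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    # Hypothetical three-page web for illustration.
    toy_web = {
        "study.edu": ["phones.com"],
        "phones.com": ["study.edu", "shop.com"],
        "shop.com": ["phones.com"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

The ranking falls out of link structure alone – no advertiser & no query involved – which is why the cellular-phone study could come out on top.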