
Tuesday, May 04, 2010

Google Bibliography?

No, Google Bibliography is not a real product. I really wish it were, though. I am currently starting to collect documents for my doctoral dissertation proposal, and I keep running into the same issue over and over again: redundancy. I am absolutely fearful that my EndNote library is going to get squashed by any number of possible deaths. Call me paranoid, but when it comes to data, well... OK, I guess I'm paranoid.
Large journal sources such as the ACM already have a system for exporting to any number of citation storage packages, including the ever-popular EndNote. Here is the issue I have, though: EndNote does a great job of keeping my references together, but does so in an inelegant way. After three years of Gmail, three years of Google Docs, and access to a lot of these services on the fly via mobile, I have come to rely on the elegance of Google software. Even more than that elegance, I rely on the cloud to store the most critical information as a backup device. The culmination of my academic career is more than "critical" to me.

What to do? I could just continue to use EndNote X3, which my university makes available to me for free. I then have the issue of storing my library, and all the PDF articles associated with it, in a central repository and "syncing" them. I use multiple computers for this process, so now I am almost tied to those little flash drives for my sync. Ugh. I suppose I could "upload" my library files to Google Docs as a backup, but that again seems inelegant. Why could I not have a solution where I can store, modify, read, relate, tag, and organize my citations in the cloud, as an integrated service with the apps I already rely on from Google?
What I want: a service which ties into a document storage package like Google Docs, can easily be updated like Google Bookmarks for Scholar searches, and is easily tagged (like all Google products). I want a Google citation database! In the cloud, with massive storage, tags, easy search (search through the PDF uploads too), and linked to Google Talk for collaboration. I think this need fits right in the middle between Google Docs and Google Apps.

Please don't make me carry all my research and literature on a flash drive...please?


Wednesday, February 10, 2010

Google Buzz, bright idea but why?

Maybe I'm just not getting it. Maybe I never will. I read the information, watched the video, and have been "buzzing" with my friend TiVO25 for nearly 20 minutes now. What does Buzz accomplish?

Let me first share a few things it does do well. The application shares and starts conversations well. I can follow a conversation with absolute ease, and the user interface is easy and clean. Everything is within Gmail, so I'm not really having to learn anything new. Very intuitive and fast. The response time is quick, and I'm not left wondering if Buzz is working. The features I would expect to see, like "email this" and reply commenting, are there as well. Yep, that's about it.

Now here are the things I do not like. As Buzz is rolled out to people, they are automatically added to my followers. I never manually added anyone to follow or be followed by; it was automatic. By default, all conversations are public and are stored on the web along with a profile page. Here is mine as an example (http://ping.fm/veXvi). I never asked for the profile page, nor for all my Buzz conversations to be auto-added to a page which can now be crawled and added to the search cache. Oops: when I added the other sites to Buzz, it auto-posted my last tweet and blog entry. Thanks for asking first! I auto-spammed my Gmail friends with material. There is a link in my mailbox side nav for Buzz, so why am I also getting it in my mail inbox? My BlackBerry is having seizures trying to keep up with Gmail mobile because I'm having a Buzz conversation? No thanks.

My biggest concern is that this only takes information from other sites; it doesn't send it out. It would have been better if this were like ping.fm (where I am writing this right now). I want ease of use. It is easy for me to log in to one place and send an update to all my syndicated sites (Facebook, Twitter, Blogger, etc.). What does it accomplish to have my Buzz updated from these sites, where my friends, family, and followers already exist? Who is reading my Buzz then? Is it to try to convert more people to Gmail? Why would I do that? Email is a personal choice. Do I believe Gmail is better? Yes! Am I going to say that Buzz is a reason to migrate to Gmail as a mail platform? No.

All in all, Buzz looks great, works fast, and does what, exactly? My existing, flexible, and well-established services are not enhanced by this product, and neither am I, for that matter.

Friday, December 05, 2008

Urchin Link Generator Missing

I noticed this morning that the Urchin link generator available from Google has been replaced with a blank page. It makes me wonder what is going on. I cannot find any information about the change at Search Engine Land. I know that the older Urchin software was only going to be supported for a while, but I could swear the tracking codes were going to have continued support, as it was only the JavaScript that was changing (urchin.js vs. ga.js).

I have posted a clean version of the tracking code generator at my site as a mirror. This is not intended to challenge the rights of Google or Urchin; it is simply there because I use the tool.

Perhaps it is just an outage, or perhaps they are restructuring the Urchin support tabs. The tool reduces the amount of human error, though, and I would be really surprised at its removal.

Friday, May 30, 2008

New Google Fav Icon

This morning I logged into Gmail only to notice a new favicon for Google. Instead of the branded capital "G", it is now a branded lowercase "g".

I cannot help but wonder if this is a temporary marketing ploy, a test, or a shift towards a newer, sleeker, more trendy Google.

Thursday, April 03, 2008

Google Toolbar 5 - No Firefox Support

I have always been a fan of Google's uniqueness. They astounded me in the late '90s with specific search engines for Mac and Linux. I found they were browser-independent and really didn't cater to one operating system or another. I was happy then....

Later, Firefox came out, and once again I could abandon the operating system's default browser and run what I wanted, how I wanted, with whatever functionality I wanted. I was happy again....

I am not happy now....

I find it almost insulting that Google released what is probably the most flexible toolbar in the world in an IE 6+-only format. Whiskey Tango Foxtrot? What happened to the independence Google is known for? Where is my ability to run Firefox 2-3 on Ubuntu and still have all the functionality I need? Why am I finding fewer and fewer buttons in the Google gallery for earlier versions of the toolbar?

I wouldn't mind so much if I didn't find my own buttons gradually being taken out of the gallery. I like gadgets too; my iGoogle page is almost entirely made of them. But I run Ubuntu at home and Mac OS X at work, and I am not about to run WINE or CrossOver just to run a browser so I can have the newer version of the toolbar.

I guess this means I will have to go back and create buttons for my own needs, and host them, again.

Friday, December 14, 2007

Google Toolbar Button for Google Code

Boy is that title a mouthful or what....

I got tired of constantly having to open a new tab just to search Google Code, so I looked for a Google Toolbar button for it. There was one available, but it would not send selected text; it was search-only. If you know me, you know I will not settle for the minimum.

This version of the button has search, send-selected-text, and a feed-based drop-down list for the major areas of code.google.com.

Please enjoy, and if you find it useful, please pass it on. Google takes a long time to add buttons to their library, so getting it off my site is currently the only way. I linked it from Chriscopeland.com, but it is actually hosted on my other site, CLCResearch.com.

Thursday, December 13, 2007

Advanced Web Ranking

Advanced Web Ranking is an SEO tool devised specifically for SEO/SEM analysts and is available on Mac OS X, Linux, and Windows. It requires the Java Runtime Environment (JRE) to work, but it utilizes the local environment, so the user interface is very seamless. I will be discussing the Mac version, although the topics apply to all versions.

The primary reason I use this tool (which for me is still in demo mode) is to determine the placement of my keywords in Google search results. I also check the sites of all my competitors, as well as the sites of those I think will rank highly in Google based on relevancy. Since it tracks the information over time, I can see how I am doing and how the changes I am making to the sites are reflected (if at all) in Google SERPs. There are some things I seriously like about this software, and I will go through them first.

1. Import initial keyword lists - The software will take a list of keywords during project creation. This saves a load of time; I keep all my keywords in a text file anyway, as I am constantly monitoring and changing the words. Con - I must change keywords inside the application, which for me means that I must update the master list (regardless of whether it is a text file or a DB) as well as the application.

2. Import a list of websites to check for placement in the search engine - This is also a great tool. See #1 above for pros and cons.

3. AWR can update multiple projects at a time - Either through cron (and if you know me, you know I love to put things under cron's control) or using its manual process. The only downside to the manual process is the time it takes. Using a cron job (which the built-in scheduler interfaces with quite easily) is simpler and can be run at intervals to maximize your CPU time (weekends, nights, etc.).

4. Output formats are universal for reporting - This includes CSV, Excel CSV, and XML. I use the CSV output for easy upload into a SQL DB. Once a week my reports are generated after an update and uploaded into a DB for easy lookup and for data redundancy. This is great because I do not have to rely on a localized DB (like FileMaker or MySQL) running on my machine. The DB I upload to is tape-archived and on a very good UPS. The data is in two locations and easily accessible to multiple people. A sketch of that import is below.
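For illustration, here is a minimal sketch of that weekly import. The table layout, file path, and credentials are all assumptions on my part, since AWR's CSV columns vary by report:

#!/bin/bash
# hypothetical weekly import of an AWR CSV export into MySQL
# (table name, columns, database, and user are made up for illustration)
mysql --local-infile=1 -u seo -p rankings <<'EOF'
CREATE TABLE IF NOT EXISTS serp_positions (
  report_date DATE,
  keyword     VARCHAR(255),
  site        VARCHAR(255),
  position    INT
);
LOAD DATA LOCAL INFILE '/path/to/awr_export.csv'
INTO TABLE serp_positions
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;
EOF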

5. Contextual menus for keywords - These let you jump right to the landing page for the site and keyword, or to the SERP as it stands at the current moment, which can tell you quickly whether your results have changed.

Some of the larger cons of AWR are mainly my own issues. I would like to see a toggle in the keyword listing for showing just the keyword selected in the primary list; this would save me from having to do a regular-expression search using the built-in keyword search bar. I would also like the application to flag (again, a radio button or a checkbox) that a word is a paid term or paid keyword. That way I could quickly tell when to stop paying for a keyword already at #1 or #2.

All in all, this is a good application. It is not the entirety of what I am currently seeking, which would be a keyword-management tool that could auto-import from multiple locations, but it will do in getting me relevant data to analyze in my SEO roles.

- Chris Copeland

Monday, October 22, 2007

How To: The Urchin Data Extractor part 2

Well, I said I would publish my script on Friday. Sorry about that; my wife and I moved into a new house over the weekend, and I guess I was just a little overzealous.

Anyway: the script. In the previous post I showed you how to (basically) set up the Urchin script and make command-line calls to it. In this part, I will show you what I did to automate the process for YTD 2007, including all the months up to the current month. I use bash quite a bit here. I'm sure there is a faster way to automate this without using as many tmp files, but I like to keep my data in stages.

Here is the script:
#!/bin/bash
cd /Users/ChrisCopeland/Apps/scripts/urchin
currentMonth=`date '+%m'`
currentMonth=${currentMonth#0} # strip the leading zero so 08/09 are not read as invalid octal in arithmetic
currentdate=`date '+%Y%m%d'`
currentYear=`date '+%Y'`
i=1
let loopVar=currentMonth+1
echo "Monthly Reports Available :"

while [ $i -lt $loopVar ]; do
echo "$i"
let i=i+1
done
read -p "please select the month or enter ytd for year to date: " -e input
case "$input" in
'ytd')
perl u5data_extractor.pl --begin 20070101 --end $currentdate --max 21000 --report 1201 >>fileTMP;
;;
'1')
perl u5data_extractor.pl --begin 20070101 --end 20070131 --max 21000 --report 1201 >>fileTMP;
;;
'2')
perl u5data_extractor.pl --begin 20070201 --end 20070228 --max 21000 --report 1201 >>fileTMP;
;;
'3')
perl u5data_extractor.pl --begin 20070301 --end 20070331 --max 21000 --report 1201 >>fileTMP;
;;
'4')
perl u5data_extractor.pl --begin 20070401 --end 20070430 --max 21000 --report 1201 >>fileTMP;
;;
'5')
perl u5data_extractor.pl --begin 20070501 --end 20070531 --max 21000 --report 1201 >>fileTMP;
;;
'6')
perl u5data_extractor.pl --begin 20070601 --end 20070630 --max 21000 --report 1201 >>fileTMP;
;;
'7')
perl u5data_extractor.pl --begin 20070701 --end 20070731 --max 21000 --report 1201 >>fileTMP;
;;
'8')
perl u5data_extractor.pl --begin 20070801 --end 20070831 --max 21000 --report 1201 >>fileTMP;
;;
'9')
perl u5data_extractor.pl --begin 20070901 --end 20070930 --max 21000 --report 1201 >>fileTMP;
;;
'10')
perl u5data_extractor.pl --begin 20071001 --end 20071031 --max 21000 --report 1201 >>fileTMP;
;;
'11')
perl u5data_extractor.pl --begin 20071101 --end 20071130 --max 21000 --report 1201 >>fileTMP;
;;
'12')
perl u5data_extractor.pl --begin 20071201 --end 20071231 --max 21000 --report 1201 >>fileTMP;
;;
esac

#cleans the slash for easier editing
tr "/" "_" <>fileTMP2
#start of line removal
sed '
s/_index.cfm//
s/_&safe=vss//
s/_&adlt=strict//
s/.cfm//
s/,//
s/\ $//' < fileTMP2 >> fileTMP3
cat cleanThese | while read line; do
sed -ie "/$line/d" fileTMP3
done
less fileTMP3 | cut -c 2-500 | grep -v "^ " | grep -v "^_" | grep -v "^#" >> fileTMPws
tr -s " " < fileTMPws >> fileTMPcond
sed 's/ /,/' < fileTMPcond > /Users/ChrisCopeland/Sites/urchinreports/file$currentYear-Month$input.csv
#cleans temp directory
rm fileTMP*
echo "your report is complete"

Now we can go through the script. The first part should be self-explanatory: I am setting the values for the current date, month, and year (the year is not yet implemented in my script, but will be soon). Then I ask the user which monthly report they would like to generate, and show them a numerical list of the reports available (from 1 to the current month). It is in this case statement that I will implement the current year, so that a user can get the month-and-year data they need. At the moment we are only interested in the year 2007. A sketch of that change is below.
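As a sketch of where the year handling is headed, the whole case statement could collapse into computed dates. This is an assumption of mine, not the script as published, and the date -d arithmetic requires GNU date (on OS X you would need the BSD -v flags instead):

# hypothetical replacement for the case statement: compute begin/end
# for numeric month $input of $currentYear (keep the ytd branch as-is)
month=`printf "%02d" $input`
begin="${currentYear}${month}01"
end=`date -d "${currentYear}-${month}-01 +1 month -1 day" '+%Y%m%d'`
perl u5data_extractor.pl --begin $begin --end $end --max 21000 --report 1201 >>fileTMP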

Next comes the hairy part - the cleaning of useless data. I have a file for the most common items I want filtered, called cleanThese (simple name). Before I open that file, though, I want to clean certain characters and items which get skipped over, due to the fact that the generated log file will have a list of URLs and paths in it. Paths and URLs sometimes have weird characters, like ";", ":", "/", etc. Try passing these into a command line sometime and you will see how troublesome they can be. So let's get them out of there.

tr "/" "_" - replaced "/" with an underscore, which will make it easier to clean the rest of the log.

Now we throw this whole thing to sed - a great program.
sed '
s/_index.cfm//
s/_&safe=vss//
s/_&adlt=strict//
s/.cfm//
s/,//
s/\ $//' < fileTMP2 >> fileTMP3

We have a lot of older ColdFusion files, and some items that cause problems in the clean file. This sed command, which is a chained command (one substitution per line), cleans these things out and leaves nothing in the pattern's place. You can see now why I cleaned out the extra "/" first: I would otherwise have been passing a "///" to sed, which it doesn't understand.

Now the cleanThese file:
cat cleanThese | while read line; do
sed -ie "/$line/d" fileTMP3
done
This reads each line of the cleanThese file (which I can modify as I like) and deletes every line matching the pattern, effectively removing the line.
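For illustration, a few hypothetical lines of the sort of thing my cleanThese file holds (yours will differ):

favicon.ico
robots.txt
_test_
_staging_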

Then I want to clean up the formatting from the output of the cleaning:
less fileTMP3 | cut -c 2-500 | grep -v "^ " | grep -v "^_" | grep -v "^#" >> fileTMPws

This line will cut certain characters out (based on the original output), pass the result to grep with an inverted search three times, looking for different patterns, then write that out to yet another tmp file.

The next line:
tr -s " " < fileTMPws >> fileTMPcond
compresses all the space characters into one and outputs that to another file, which sed then takes in, replacing the now-single space with a "," - effectively making this a CSV file, which is named with the current month and written to a directory.
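To make that concrete, here is how one hypothetical line moves through those two steps:

# after the earlier cleaning:  _products_widgets    1024
# after tr -s " ":             _products_widgets 1024
# after sed 's/ /,/':          _products_widgets,1024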

The next couple of lines clean the directory of tmp files and report to the user in the shell that the report is ready.

This will leave you a clean, importable, ready-to-query CSV file just aching to be imported into a SQL engine of some sort.
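If a full SQL server is overkill, even the sqlite3 command-line shell will ingest the file directly. A minimal sketch, with the database and table names made up:

# hypothetical sqlite3 import of the generated CSV
sqlite3 urchin.db <<'EOF'
CREATE TABLE IF NOT EXISTS pages (pagepath TEXT, pageviews INTEGER);
.separator ","
.import /Users/ChrisCopeland/Sites/urchinreports/file2007-Month5.csv pages
EOF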

Again, if you want to run a different type of report, or use the case statement to generate a set of reports, you can visit my website to find a list of the Urchin reports available.

I want to find a good way to make this script available in a web interface at some point. I would also like to give the user a list of reports and years at the front of the application, just to help automate the process further.

Please Enjoy!

Thursday, October 18, 2007

How To: The Urchin Data Extractor (u5data_extractor)

You can get the Perl scripts for customizing Urchin data at the Google Urchin support page. I read what little documentation there is on this subject, which is a basic how-to without much resource behind it. Urchin support firms charge something serious to get this kind of thing done, and here I am being a nice guy, giving away what I learned FOR FREE.

So let's begin with the lessons I learned.

1. Use some form of Linux/Unix. I could not, for the life of me, get any of these scripts to work with Windows, and I think this is because of the path: the Perl script is looking for a Unix-like path. I'm sure there are people out there, smarter than I, who can get this to work on a Windows server, but I am not one of them. The examples I give will be run from a Macintosh running OS X 10.4.10, ActiveState Perl, and bash. In addition, I would like to thank the wonderful folks (yet again) over at the macosxhints forums, as well as the unix.com forums, for helping me get the syntax in my scripts correct.

2. Use a step by step process.

3. Verify your data, and back up! The last thing you want to do is run an untested, "use at your own risk" script on your Urchin reports.

4. Do not always believe the available documentation.

5. When report testing, use small segments of data for your report. It saves time and you get to test your text scrubber faster.

Ok - now let's get to the logical process. What I wanted to do was to pull certain reports from Urchin and post them to a database, preferably some flavor of SQL.

The process will look something like this.
1. run perl script with start date, end date, report type, and number of items returned.
2. save report as a text file
3. scrub text file for bad characters, bad lines, and data which is not applicable.
4. comma delimit the file
5. hand csv file to sql import engine.

Sounds easy, right? It is, for the most part.

The u5data_extractor script will do a lot of this work for you. This is the usage section of the script, which will also show up on the command line if you call the script with ~$ perl u5data_extractor.pl. I removed the copyright and some other text for the purpose of posting to the blog.
###########################################################
# Usage: u5data_extractor.pl [--begin YYYYMMDD] [--end YYYYMMDD] [--help]
# [--language LA] [--max N] [--profile PROFILE]
# [--report RRRR] [--urchinpath PATH]
#
# Where:
# '--begin YYYYMMDD' specifies the starting date (default: one week ago)
# '--end YYYYMMDD' specifies the ending date (default: yesterday)
# '--help' displays this message
# '--language LA' specifies the language for the report. Available
# languages are: ch, en, fr, ge, it, ja, ko, po, sp, and sw
# '--max N' is the maximum number of entries printed in the top 10 report
# types (default is 10).
# '--profile PROFILE' specifies the profile to retrieve data from. The
# default is specified at the beginning of this script
# '--report RRRR' is the 4-digit number for the report (default is 1102)
# Run this script with --help to see a list of available reports
# '--urchinpath PATH' specifies the path to the Urchin distribution.
# Note that you can edit the script and set your path as a default
###################################################

Giving the script your default path:
You will need to give the script the path to the Urchin Directory.
This is the line for my machine (following a Unix path):
my $urchinpath = "/usr/local/urchin"; # Path to the Urchin distribution

Give the script your default profile:
You will need to give the script the default profile.
This is the line for a made-up profile in the script.
my $profile = "My Default Profile"; # Name of the default profile
This is important - you do not have to use %20 to represent spaces if you are using the quotes. Urchin, by default, stores the profile directories with %20 for whitespace characters.

The report number is a difficult thing. Where do you find those reports? I found an article, somewhere, which shows the report numbers. Have no fear: I made a list of the Urchin report numbers for you.

I will give an example, since none was really given for me. Let's say I want to run a report from Jan 01, 2007 to Jan 27, 2007 for the report "Visitors & Sessions".
So when you call the script, you will use the following syntax:
perl u5data_extractor.pl --begin 20070101 --end 20070127 --report 1903 --max 10

This will send the output to standard out (the screen), which I will not post for privacy reasons.

If you want to redirect the output, feel free to do so:
perl u5data_extractor.pl --begin 20070101 --end 20070127 --report 1903 --max 10 >> output.file

Tomorrow I will post my scrubbing process, as well as the script I use to back up the data and generate the reports.

Enjoy!

Thursday, September 20, 2007

Blocking your Competition in AdWords

I decided to go ahead and block my competitors from viewing my ads. First off, what does this really mean? The nuts and bolts are that any time a person types a word into the Google search engine, they could possibly see an advertisement for your company, if you have purchased that phrase or term (keyword). If I wanted to be mean, I would click on the ads of my competitors; this is known as a form of "click fraud". I am not suggesting that you go out and start racking up the clicks; in fact it would be harmful, as Google and others have very good systems in place to catch it.

Which doesn't stop the "occasional" click from your competitors.

Google does a pretty good job of offering a tool in the AdWords application. If you navigate to your AdWords account, you can select "Tools", and from there select "IP Exclusion". Google will allow you to block up to 20 IP addresses, including wildcard ranges. So, getting started, let us say that Yahoo! is my competition (it's not).

The first thing I would do is find the IP range of the PICs, the people in charge. Knowing what I know about Yahoo!, they have a Mountain View/San Jose office and (I think it's still there) an office somewhere in Dallas. Let's find out... This is where you need to know how to use the old whois tool. Whois is a query tool that tells you who an IP has been registered to. The main database you want to query is ARIN, the American Registry for Internet Numbers; they will more than likely tell you who has what IP. So how do you find the IP if you don't know it? You will need to do an IP lookup, which can sometimes be done with a straight ping, traceroute, or nslookup.

Now do not just go pinging away at the web address; that may not get you what you need to know! I almost never use the www.foobar.com web address, simply because it doesn't always mean what you think it means.

Webservers are not always located at the corporate offices, where the marketing department probably is! I tend to locate an office by its mail server, which is also not always at the corporate office, but more often than not it is. So let's find Yahoo!'s corporate office by IP address (if we can). mail.yahoo.com shows up with an IP of 209.191.92.114. Now all I need to do is find that IP in the world. SEOmoz has a pretty little AJAX tool which can tell us where an IP is geographically located (most of the time).

209.191.92.114 shows up in San Jose, off Highway 82. This sounds right; now let's do a whois on that IP. The address comes up in Sunnyvale, not San Jose, but for my purposes it's close enough.
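If you prefer the command line to the web tools, the same two lookups can be done with nslookup and whois (the -h flag just points whois at ARIN's server):

# resolve the mail host to an IP
nslookup mail.yahoo.com
# ask ARIN who owns the resulting address
whois -h whois.arin.net 209.191.92.114

Now let's look at the whois result: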

Search results for: 209.191.92.114
OrgName: Yahoo!
OrgID:      YAOO
Address: 701 First Ave
City: Sunnyvale
StateProv: CA
PostalCode: 94089
Country: US

NetRange: 209.191.64.0 - 209.191.127.255
CIDR: 209.191.64.0/18
NetName: A-YAHOO-US3
NetHandle: NET-209-191-64-0-1
Parent: NET-209-0-0-0-0
NetType: Direct Allocation
NameServer: NS1.YAHOO.COM
NameServer: NS2.YAHOO.COM
NameServer: NS3.YAHOO.COM
NameServer: NS4.YAHOO.COM
NameServer: NS5.YAHOO.COM
Comment:
RegDate: 2005-05-20
Updated: 2005-07-21
RAbuseHandle: NETWO857-ARIN
RAbuseName: Network Abuse
RAbusePhone: +1-408-349-3300
RAbuseEmail: network-abuse@cc.yahoo-inc.com
OrgAbuseHandle: NETWO857-ARIN
OrgAbuseName: Network Abuse
OrgAbusePhone: +1-408-349-3300
OrgAbuseEmail: network-abuse@cc.yahoo-inc.com
OrgTechHandle: NA258-ARIN
OrgTechName: Netblock Admin
OrgTechPhone: +1-408-349-3300
OrgTechEmail: netblockadmin@yahoo-inc.com
# ARIN WHOIS database, last updated 2007-09-20 19:10
The part we are interested in for blocking is:
NetRange:   209.191.64.0 - 209.191.127.255 
which is a LOT of addresses - the /18 spans third octets 64 through 127, i.e. 64 wildcard blocks, so the 20-entry limit cannot cover them all - but you could enter up to 20 of these ranges in the AdWords list (a quick loop for generating them is below) as:
209.191.64.*
209.191.65.*
etc
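Since typing those out is tedious, a quick loop will print the first 20 wildcard entries to paste into the exclusion list:

# prints 209.191.64.* through 209.191.83.*, the first 20 of the 64 blocks
i=64
while [ $i -le 83 ]; do
echo "209.191.$i.*"
i=`expr $i + 1`
done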

That should do it....enjoy

Friday, September 14, 2007

More Fun with Lynx

I grew up using Gopher servers before there was a WWW or HTTP, so when the real "web" came along it was, needless to say, awesome. One of the first web browsers I used was Lynx.

Lynx is a very, very simple browser, very useful in scripts and for checking how a search engine views a webpage. If Lynx cannot see your content, it is very doubtful that a search bot will see it either.

The last post showed how to use Lynx to check Google's cache times. This one will show you how to automate Lynx to retrieve web information for you.

Here is a simple script which will read a file line by line and pass the information off to Lynx for a Google search.

#!/bin/bash
cat ${1} | while read mySearchTerm; do
lynx -source -accept_all_cookies "http://www.google.com/search?q=$mySearchTerm"
done

This script will throw everything to standard out. What I do is pass this information on to a text file, or to grep for counting purposes.

#!/bin/bash
cat ${1} | while read mySearchTerm; do
lynx -source -accept_all_cookies "http://www.google.com/search?q=$mySearchTerm" |grep -c 'pattern.to.count'>> /path/to/text/file.txt
done

And now we have automatic document retrieval from Google. A word of warning: because this passes on whatever is in the line, you must be careful of non-alphanumeric characters like !@#$%^&*-\/, as these will be passed to Google too, which can alter the search results (one workaround is sketched below). You can also use things like the 'date' command or other small *nix programs to alter the URL fed to Lynx. If you want to schedule this sort of script, you can always use the crontab functionality found in Unix, Linux, and OS X. Be sure to read up on the man page for lynx.
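One way around the character problem is to percent-encode each term before it hits the URL. A minimal sketch, assuming Perl's URI::Escape module is installed (it ships with libwww-perl):

#!/bin/bash
# percent-encode each search term so !@#$%^&* etc. survive the URL intact
cat ${1} | while read mySearchTerm; do
encoded=`printf '%s' "$mySearchTerm" | perl -MURI::Escape -ne 'chomp; print uri_escape($_);'`
lynx -source -accept_all_cookies "http://www.google.com/search?q=$encoded"
done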

Enjoy.

Wednesday, September 12, 2007

Quick Check of Google Crawl

If you are not using Google's Webmaster tools this is a quick BASH script which can check the spider rate.

Type cache:your.website.here into a Google search, then note the URL the browser returns and save it (it must contain the IP).

#!/bin/bash
set -o errexit
stamp=`date`
touch temp.txt
lynx -dump -accept_all_cookies "cached.url.here" | grep 'retrieved' | cut -c 4-50 >> temp.txt
cache=`cat temp.txt`
rm -rf temp.txt
echo $stamp Google $cache >> /path/to/desired/dir/file.txt

Check out the documentation on the cut command, which takes the info from grep; it truncates the number of characters passed to temp.txt. Adjust it to get the desired result.

This should give you a return result like this:
Fri Aug 31 14:18:05 CDT 2007 Google retrieved on Aug 30, 2007 13:49:11 GMT.
Tue Sep 4 09:10:20 CDT 2007 Google retrieved on Aug 31, 2007 14:35:14 GMT.
Wed Sep 5 07:51:55 CDT 2007 Google retrieved on Sep 2, 2007 15:52:02 GMT.
Thu Sep 6 13:01:19 CDT 2007 Google retrieved on Sep 4, 2007 22:35:39 GMT.
Fri Sep 7 07:00:00 CDT 2007 Google retrieved on Sep 5, 2007 13:25:22 GMT.
Sat Sep 8 07:00:00 CDT 2007 Google retrieved on Sep 6, 2007 13:28:59 GMT.
Sun Sep 9 07:00:00 CDT 2007 Google retrieved on Sep 8, 2007 08:19:05 GMT.
Mon Sep 10 07:00:00 CDT 2007 Google retrieved on Sep 8, 2007 08:19:05 GMT.
Tue Sep 11 07:00:00 CDT 2007 Google retrieved on Sep 10, 2007 08:54:21 GMT.
Wed Sep 12 07:00:00 CDT 2007 Google retrieved on Sep 10, 2007 23:52:44 GMT.

A quick and dirty log of when Google crawled my site. I then threw this into crontab to run every morning at 4 a.m. (a sample entry is below), and my browser is set to open this file on launch.
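For reference, the crontab entry looks something like this; the script path is hypothetical:

# run the crawl check every morning at 4:00
0 4 * * * /Users/ChrisCopeland/bin/check_google_crawl.sh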

Enjoy.

Friday, September 07, 2007

Guild Wars Wiki Joins Google Toolbar

I decided that it was time to add yet another button to my toolbar, this time to supplement the vast amount of time I waste playing my only online RPG, Guild Wars. I just have many more things to look up now that GWEN has shipped.

This particular toolbar button has several features. Instead of navigating within the official wiki, it has the most common links built in: skills, elite skills, missions, quests, and maps. The button can also use the Google Toolbar for searching, as the search box feeds directly to the wiki's search engine, and you can highlight text and pass it to the wiki search as well.

I hope this gets you lots of drops! Tested in FF 2.0/IE6&7 on OS X, XP.

Guild Wars Toolbar Button

Keyword Change Logs

This is my first official (but certainly not the last) gripe about Google AdWords. We all know that Google has done a lot (just look at my previous post), but tracking keywords to me is very similar to project management or software development.

It needs a CVS! Please!

The fact that the software doesn't have a way to log changes means that I must have a great memory and must be constantly sending emails to coworkers about changes. Why is this so important, you ask? Imagine this if you will: I am the only SEO/SEM at my office. Now imagine that I DO keep track of my changes to the keyword/PPC campaigns in Google, on little sticky notes all over my desk. Now imagine that I have been doing this for years. Today I get hit by a bus, and all that institutional knowledge is lost to the patterns of the universe, never to be seen again. This should make you shudder (if you are not thinking of personnel loss in your disaster recovery plan, you really need to address the issue).

Have a CVS, even if it is a notepad-like application. What would be really nice is to be able to track changes to keywords, ads, campaigns, bids, etc. in the Google applications. The standalone client allows you to batch changes and attach a note, but no other client can see those notes; they are saved (somewhere) on the local device. Come on! A SQL table is not that hard to add to the AdWords package (see the sketch below). Instead of integrating a groupware check-in/check-out system, I am left to create one.
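To show how little it would take, here is a minimal sketch of the kind of change-log table I mean; every name in it is made up for illustration:

# hypothetical change-log schema, created via the mysql client
mysql -u seo -p adwords_log <<'EOF'
CREATE TABLE IF NOT EXISTS keyword_changes (
  changed_at DATETIME NOT NULL,
  author     VARCHAR(64) NOT NULL,
  campaign   VARCHAR(128) NOT NULL,
  keyword    VARCHAR(255) NOT NULL,
  field      VARCHAR(64) NOT NULL,   -- e.g. bid, match type, status
  old_value  VARCHAR(255),
  new_value  VARCHAR(255),
  note       TEXT
);
EOF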

Grrrr...and don't tell me it will be in Urchin 6, I hear that from too many people.

Ok - this is me stepping off my soapbox.

Wednesday, September 05, 2007

Google Toolbar AdWords Button

I wrote this to end some serious frustration in searching my employer's ad campaigns, and it might be helpful to anyone who uses Google AdWords. This is a quick and dirty Google button for sending search data to the built-in search engine and for sending highlighted text in the browser to the same engine. It also allows quick navigation to the tools which are usually several clicks deep in the AdWords/Analytics menu bar.

You can get the software installed by going to the main page and clicking the links for the Google Buttons. This will require the latest version (4, I think) of the Google Toolbar.

or you can get my Google AdWords Button Here

My Increasing Transition Away From Yahoo!

I would first like to say that I have been a Yahoo! user for over a decade. In terms of the internet, that is an eternity. I started using Yahoo! search when the only browser truly available was Lynx (which I still use from time to time).

I attended the SES conference and expo in San Jose this weekend, and aside from not visiting a friend at TiVO, I had a great time, learned a lot and witnessed the ultimate corporate party.....The Google Dance 2007....

Without going into too much detail (Kimber, I want my photo please), I learned what sheer geniuses Google tends to hire. I consider myself pretty bright. I went to college at 15 for engineering (I went back to high school after learning that college wasn't for me yet), I have completed a BA and an MA, and I even managed to get published. I will get a PhD at some point as well. None of this compares with the outside-the-box thinking and mentality of the standard Google employee. After being really impressed with some of the things Google has been spending time on lately (like the 700 MHz auction), I am more inclined to check out the newer technologies coming down the pipe from Google. This leads us to Google Labs.

If you haven't been to Google Labs recently, take a peek over there. Check out the new ideas in search engine results. Moreover, check out the Firefox extensions. Add up the functionality of Firefox in general, the Google Toolbar, and the Google Toolbar API for custom buttons, and I find myself needing Yahoo!'s services less and less. Last night I exported all of my bookmarks, which I have collected over several years, out of Yahoo! and into Google. It was seamless and painless. With the addition of services like Plaxo (despite whatever controversy there may be), I am finding my internet life more and more integrated with my everyday needs.

I will be the first to say that perhaps Google is in fact the new Borg, but unlike its predecessor, it actually takes into account what I want and what I might need, instead of forcing something down my throat. I can accept a certain level of dissatisfaction if my needs are being met; as of yet, though, I am not dissatisfied with the general nature of Google's services (including Analytics) or their mentality towards their users, and my needs are being met and perhaps even predicted.