This is my first Python script. Many thanks to the folks at python-forums.org for the help! There is nothing really special here; I just wanted to do more than a hello world.
It was written with ActivePython 2.5 for OS X, using both Emacs and Smultron for editing.
#!/usr/local/bin/python
import urllib

url = 'http://www.dogpile.com/info.dogpl/search/web/'
print "Enter your search term \n"
query = raw_input()
# URL-encode the query so spaces and punctuation are safe in the request
url = url + urllib.quote_plus(query)
urllib.urlretrieve(url, 'response.html')
print 'Your HTML SERP was saved as response.html'
print "press enter to quit"
raw_input()
which gives you command line output like this:
:~ ChrisCopeland$ ./blog.py
Enter your search term
blogger.com
Your HTML SERP was saved as response.html
press enter to quit
and the HTML document is a raw dump of the SERP from Dogpile.
A word of warning: some search engines (especially ones that start with a capital G) consider automated calls like this a violation of their terms of service, so just be careful.
Enjoy
Wednesday, September 26, 2007
Saturday, September 22, 2007
phpMyAdmin
If you are doing any type of web development and plan on using MySQL, do yourself a real favor and install phpMyAdmin. I have been using it for a few years now (since my move from PostgreSQL to MySQL), and I gotta say it is a really nice tool to have.
There are several versions available; the OS X version comes in a nice mpkg installer along with an easy-start script that generates your Blowfish key and gets you going (a nice added feature). I keep Browzar (Windows) or Safari open to manage the tables and databases while I am working in other applications, which makes things really convenient.
For that matter, make sure you at least try out MySQL. The latest versions come in a variety of installer packages, complete with scripting to configure the daemon for launch at startup.
Thursday, September 20, 2007
Blocking your Competition in AdWords
I decided to go ahead and block my competitors from viewing my ads. First off, what does this really mean? The nuts and bolts are that any time a person types a word into the Google search engine, they could see an advertisement for your company if you have purchased that phrase or term (keyword). If I wanted to be mean, I could click on my competitors' ads, which is a form of "click fraud". I am not suggesting that you go out and start racking up clicks; in fact it would be harmful, since Google and others have very good systems in place to catch it.
That doesn't stop the "occasional" click from your competitors, though.
Google does a pretty good job of offering a tool for this in the AdWords application. Navigate to your AdWords account, select "Tools", and from there select "IP Exclusion". Google will allow you to block up to 20 IP addresses, including wildcard ranges. So, getting started, let us say that Yahoo! is my competition (it's not).
The first thing I would do is find the IP range of the PICs, the people in charge. Knowing what I know about Yahoo!, they have a Mountain View/San Jose office and (I think it's still there) an office somewhere in Dallas. Let's find out. This is where you need to know how to use the old whois tool. Whois is a query tool that tells you who an IP has been registered to. The main database you want to query is ARIN, the American Registry for Internet Numbers; they will more than likely tell you who has what IP. So how do you find the IP if you don't know it? You will need to do an IP lookup, which can sometimes be done with a straight ping, traceroute, or nslookup.
Now do not just go pinging away at the web address; that may not tell you what you need to know! I almost never use the www.foobar.com address, simply because it doesn't always mean what you think it means.
Webservers are not always located at the corporate offices, where the marketing department is probably located! I tend to locate an office by its mail server, which is also not always at the corporate office, but more often than not it is. So let's find Yahoo!'s corporate office by IP address (if we can). mail.yahoo.com shows up with an IP of 209.191.92.114. Now all I need to do is find where that IP is in the world. SEOmoz has a pretty little AJAX tool which can tell us where an IP is geographically located (most of the time).
209.191.92.114 shows up in San Jose, off Highway 82. This sounds right; now let's do a whois on that IP. The address comes up in Sunnyvale, not San Jose, but for my purposes it's close enough. Now let's look at the whois:
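Those lookup steps can be sketched in the shell; the parsing helper below is my own addition (real nslookup output varies by platform), and the live commands are left commented since they need network access:

```shell
#!/bin/sh
# Pull the last "Address:" field out of nslookup-style output
last_address() {
  awk '/^Address/ { ip = $2 } END { print ip }'
}
# With a live network you would run something like:
#   nslookup mail.yahoo.com | last_address
#   whois 209.191.92.114 | grep -E 'OrgName|NetRange|CIDR'
```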
Search results for: 209.191.92.114
OrgName: Yahoo!
OrgID: YAOO
Address: 701 First Ave
City: Sunnyvale
StateProv: CA
PostalCode: 94089
Country: US
NetRange: 209.191.64.0 - 209.191.127.255
CIDR: 209.191.64.0/18
NetName: A-YAHOO-US3
NetHandle: NET-209-191-64-0-1
Parent: NET-209-0-0-0-0
NetType: Direct Allocation
NameServer: NS1.YAHOO.COM
NameServer: NS2.YAHOO.COM
NameServer: NS3.YAHOO.COM
NameServer: NS4.YAHOO.COM
NameServer: NS5.YAHOO.COM
Comment:
RegDate: 2005-05-20
Updated: 2005-07-21
RAbuseHandle: NETWO857-ARIN
RAbuseName: Network Abuse
RAbusePhone: +1-408-349-3300
RAbuseEmail: network-abuse@cc.yahoo-inc.com
OrgAbuseHandle: NETWO857-ARIN
OrgAbuseName: Network Abuse
OrgAbusePhone: +1-408-349-3300
OrgAbuseEmail: network-abuse@cc.yahoo-inc.com
OrgTechHandle: NA258-ARIN
OrgTechName: Netblock Admin
OrgTechPhone: +1-408-349-3300
OrgTechEmail: netblockadmin@yahoo-inc.com
# ARIN WHOIS database, last updated 2007-09-20 19:10
The part we are interested in for blocking is:
NetRange: 209.191.64.0 - 209.191.127.255
which is a LOT of addresses, but you could enter up to 20 of these ranges in AdWords by listing them as:
209.191.64.*
209.191.65.*
etc
That should do it....enjoy
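As a rough sketch, that /18 covers 64 /24 blocks (209.191.64.x through 209.191.127.x), so a tiny loop can print the first 20 in AdWords wildcard form; adjust the octets for whatever range your whois returns:

```shell
#!/bin/sh
# Print the first 20 /24 blocks of 209.191.64.0/18 as AdWords wildcard entries
third_octet=64
count=0
while [ $count -lt 20 ]; do
  echo "209.191.$((third_octet + count)).*"
  count=$((count + 1))
done
```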
Wednesday, September 19, 2007
Lessons Learned with Airport Extreme Base Station
So I made the decision to use the built-in print and file sharing services of the Airport Extreme Base Station. I went to Best Buy yesterday, shelled out my $179.00, and walked out. I hooked up a 500GB USB drive and was on my merry way to finally having a flexible NAS.
Setup for the Base Station was easy. I did some homework first, though, and here are some suggestions if you want to set one up yourself. I have what I like to call Acquired Knowledge: through trial and error, I can save you a few steps.
1. RTFM - Read the Fine Manual. It comes as a PDF from Apple's site and as a paper manual in the box.
2. Format the disk using something other than Windows. I had multiple problems when the disk was formatted using NTFS, FAT32, and UDF. NTFS produced nothing but errors, FAT32 does not (theoretical limits aside) support disks larger than 32GB, and UDF simply wasn't seen. I finally used Apple's Disk Utility in 10.3.9, and from that moment on the AirPort software could see, mount, and share the disk.
3. Copy files via USB prior to establishing the network connection. I timed the transfer of my iTunes library (26GB): 3 hours over G, 2.5 hours over N, and roughly 30 minutes over USB 2.0.
4. Do all the updates to the AirPort Base Station, including the firmware, prior to installing the disk.
5. Make sure your wireless cards support WPA2. This is critical if you want to move to something better than WEP.
6. For heaven's sake, secure your network. Don't leave it open, especially with the optional disk and printers attached. Password-protect your network as well as your disk. Personally, I did not use the "workgroup" option when setting up the disk.
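For step 2, the same HFS+ format can also be done from Terminal with diskutil. The disk identifier below is a hypothetical placeholder (check diskutil list first), and the command is only echoed here because eraseDisk wipes the target:

```shell
#!/bin/sh
# Build the (destructive!) format command; DISK is a hypothetical identifier
DISK="disk2"
cmd="diskutil eraseDisk JHFS+ AirDisk $DISK"
# Echo instead of executing -- run it by hand once you are sure of the target
echo "$cmd"
```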
Enjoy
Monday, September 17, 2007
Using Automator to Show Hidden Files in BSD Filesystem
This has probably been posted somewhere else before, but I found it useful to take it beyond the standard shell script.
At times I want to see the hidden files in Mac OS X; at other times my OCD for being organized makes me crazy and I have to turn them off. This is a perfect example of using OS X's built-in functionality to simplify my life: turning these easy scripts into Finder plugins with Apple Automator.
The script itself is quite easy. To hide files:
#!/bin/bash
defaults write com.apple.finder AppleShowAllFiles FALSE
killall Finder
And to show files:
#!/bin/bash
defaults write com.apple.finder AppleShowAllFiles TRUE
killall Finder
Now use these in Automator, save them as plugins for the Finder, and you are good to go.
This can be a real time saver. It does, however, kill the Finder (killall Finder), which causes the Finder to relaunch, but that has not been a problem since 10.2.
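The two scripts can also be folded into a single toggle. This is a sketch of my own (the flip helper is not part of the originals), guarded so the defaults call only runs on a machine that actually has the command:

```shell
#!/bin/sh
# Flip TRUE to FALSE and anything else to TRUE
flip() {
  if [ "$1" = "TRUE" ]; then echo "FALSE"; else echo "TRUE"; fi
}
if command -v defaults >/dev/null 2>&1; then
  current=$(defaults read com.apple.finder AppleShowAllFiles 2>/dev/null)
  defaults write com.apple.finder AppleShowAllFiles "$(flip "$current")"
  killall Finder
fi
```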
Enjoy!
Friday, September 14, 2007
More Fun with Lynx
I grew up using gopher servers before there was a www or http, so when the real "web" came along it was, needless to say, awesome. One of the first web browsers I used was Lynx.
Lynx is a very simple browser, very useful in scripts and for checking how a search engine views a webpage. If Lynx cannot see your content, it is very doubtful that a search-bot will see it either.
The last post showed how to use Lynx to pull Google's cache times. This one shows how to automate Lynx to retrieve web information for you.
Here is a simple script which reads a file line by line and passes each line off to Lynx for a Google search.
#!/bin/bash
while read -r mySearchTerm; do
  lynx -source -accept_all_cookies "http://www.google.com/search?q=$mySearchTerm"
done < "${1}"
This script throws everything to standard out. What I do is redirect the output to a text file, or to grep for counting purposes.
#!/bin/bash
while read -r mySearchTerm; do
  lynx -source -accept_all_cookies "http://www.google.com/search?q=$mySearchTerm" | grep -c 'pattern.to.count' >> /path/to/text/file.txt
done < "${1}"
and now we have automatic document retrieval from Google. A word of warning: because this takes whatever is on the line, you must be careful with non-alphanumeric characters like !@#$%^&*-\/, as these will be passed on to Google too, which can alter the search results. You can also use things like the date command or other small *nix programs to alter the URL fed to Lynx. If you want to schedule this sort of script, you can always use crontab, found on Unix, Linux, and OS X. Be sure to read the man page for lynx.
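One way to defuse those special characters is to percent-encode each line before handing it to Lynx. This helper is my own addition (bash-specific), not part of the original script:

```shell
#!/bin/bash
# Percent-encode a string for use in a query URL
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9._~-]) out="$out$c" ;;            # unreserved: keep as-is
      ' ')             out="$out+" ;;             # space becomes +
      *)               out="$out$(printf '%%%02X' "'$c")" ;;  # everything else: %XX
    esac
  done
  echo "$out"
}
# e.g. inside the loop above:
#   lynx -source "http://www.google.com/search?q=$(urlencode "$mySearchTerm")"
```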
Enjoy.
Wednesday, September 12, 2007
Quick Check of Google Crawl
If you are not using Google's Webmaster Tools, this is a quick Bash script which can check the spider rate.
Type cache:your.website.here into a Google search.
Note the URL returned in the browser and save it (it must contain the IP).
#!/bin/bash
set -o errexit
stamp=`date`
touch temp.txt
lynx -dump -accept_all_cookies "cached.url.here" | grep 'retrieved' | cut -c 4-50 >> temp.txt
cache=`cat temp.txt`
rm -rf temp.txt
echo $stamp Google $cache >> /path/to/desired/dir/file.txt
Check out the documentation for the cut command, which receives the line from grep; it truncates the number of characters passed on to temp.txt. Adjust the range to get the desired result.
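To illustrate, cut -c 4-50 keeps characters 4 through 50 of each line, which in this script drops the first three characters of the matched line (the sample line below is illustrative):

```shell
#!/bin/sh
# Characters 1-3 are the leading indent; cut keeps columns 4 through 50
echo "   retrieved on Sep 10, 2007 23:52:44 GMT." | cut -c 4-50
```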
this should give you a return result like this:
Fri Aug 31 14:18:05 CDT 2007 Google retrieved on Aug 30, 2007 13:49:11 GMT.
Tue Sep 4 09:10:20 CDT 2007 Google retrieved on Aug 31, 2007 14:35:14 GMT.
Wed Sep 5 07:51:55 CDT 2007 Google retrieved on Sep 2, 2007 15:52:02 GMT.
Thu Sep 6 13:01:19 CDT 2007 Google retrieved on Sep 4, 2007 22:35:39 GMT.
Fri Sep 7 07:00:00 CDT 2007 Google retrieved on Sep 5, 2007 13:25:22 GMT.
Sat Sep 8 07:00:00 CDT 2007 Google retrieved on Sep 6, 2007 13:28:59 GMT.
Sun Sep 9 07:00:00 CDT 2007 Google retrieved on Sep 8, 2007 08:19:05 GMT.
Mon Sep 10 07:00:00 CDT 2007 Google retrieved on Sep 8, 2007 08:19:05 GMT.
Tue Sep 11 07:00:00 CDT 2007 Google retrieved on Sep 10, 2007 08:54:21 GMT.
Wed Sep 12 07:00:00 CDT 2007 Google retrieved on Sep 10, 2007 23:52:44 GMT.
a quick and dirty log of when Google crawled my site. I then threw this into crontab to run every morning at 4am, and my browser is set to open the log file upon activation.
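The crontab entry for that 4am run might look like this (the script path is a placeholder):

```shell
# min hour day month weekday  command
0 4 * * * /path/to/crawl-check.sh
```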
Enjoy.
Tuesday, September 11, 2007
Six Years Ago Today
Six years ago today I was walking into my office at American Airlines in Fort Worth, Texas, passing by the media room just in time to look up and see United Flight 175 hit the south tower.
Our world changed that day, possibly forever.
Take a moment and remember all of those who have suffered due to this event. Take a moment to remember the victims, their families, and the soldiers who went to war for them.
Friday, September 07, 2007
Guild Wars Wiki Joins Google Toolbar
I decided that it was time to add y.a. (yet another) button to my toolbar, this time to supplement the vast amount of time I waste playing my only online RPG, Guild Wars. I just have many more things to look up now that GWEN has shipped.
This particular toolbar button has several features. Instead of making you navigate within the official wiki, it has the most common links built in: skills, elite skills, missions, quests, and maps. The button can also utilize the Google Toolbar for searching, since its search box feeds directly to the wiki search engine. You can also highlight text and pass it to the wiki search engine.
I hope this gets you lots of drops! Tested in FF 2.0 and IE 6 & 7 on OS X and XP.
Guild Wars Toolbar Button
Keyword Change Logs
This is my first official (but certainly not the last) gripe about Google AdWords. We all know that Google has done a lot (just look at my previous post), but tracking keywords, to me, is very similar to project management or software development.
It needs a CVS! Please!
The fact that the software doesn't have a way to log changes means that I must have a great memory and must constantly send emails to coworkers about changes. Why is this so important, you ask? Imagine this: I am the only SEO/SEM at my office. Now imagine that I keep track of my changes to the keyword/PPC campaigns in Google on little sticky notes all over my desk, and that I have been doing this for years. Today I get hit by a bus, and all that institutional knowledge is lost to the patterns of the universe, never to be seen again. This should make you shudder (if you are not thinking about personnel loss in your disaster recovery plan, you really need to address the issue).
Have a CVS, even if it is a notepad-like application. What would be really nice is the ability to track changes to keywords, ads, campaigns, bids, etc. in the Google applications. The stand-alone client allows you to batch changes and attach a note, but no other client can see those notes; they are saved (somewhere) on the local device. Come on! A SQL table is not that hard to add to the AdWords package. Instead of getting an integrated group-ware check-in/check-out system, I am left to create one.
Grrrr... and don't tell me it will be in Urchin 6; I hear that from too many people.
Ok - this is me stepping off my soap box.
Wednesday, September 05, 2007
Google Toolbar AdWords Button
I wrote this to end some serious frustration in searching my employer's ad campaigns; it might be helpful to anyone who uses Google AdWords. This is a quick and dirty Google Toolbar button for sending search data to the built-in search engine, and for sending highlighted text in the browser to the same engine. It also allows quick navigation to tools which usually take several clicks to reach in the AdWords/Analytics menus.
You can get the software installed by going to the main page and clicking the links for the Google Buttons. This requires the latest version (4, I think) of the Google Toolbar,
or you can get my Google AdWords Button here.
My Increasing Transition Away From Yahoo!
I would first like to say that I have been a Yahoo! user for over a decade. In terms of the internet, that is an eternity. I started using Yahoo! search when the only browser truly available was Lynx (which I still use from time to time).
I attended the SES Conference and Expo in San Jose this weekend, and aside from not visiting a friend at TiVo, I had a great time, learned a lot, and witnessed the ultimate corporate party: the Google Dance 2007.
Without going into too much detail (Kimber, I want my photo please), I learned what sheer geniuses Google tends to hire. I consider myself pretty bright: I went to college at 15 for engineering (and went back to high school after learning that college wasn't for me yet), I have completed a BA and an MA, and I even managed to get published. I will get a PhD at some point as well. None of this compares with the outside-the-box thinking and mentality of the standard Google employee. After being really impressed with some of the things Google has been spending time on lately (like the 700 MHz auction), I am more inclined to check out the newer technologies coming down the pipe from Google. Which leads us to Google Labs.
If you haven't been to Google Labs recently, take a peek over there. Check out the new ideas in search engine results. Moreover, check out the Firefox extensions. If you add up the functionality of Firefox in general, the Google Toolbar, and the Google Toolbar API for custom buttons, I find myself needing Yahoo!'s services less and less. Last night I exported all of the bookmarks I have collected over several years out of Yahoo! and into Google. It was seamless and painless. With the addition of services like Plaxo (despite whatever controversy there may be), I am finding my internet life more and more integrated with my everyday needs.
I will be the first to say that perhaps Google is in fact the new Borg, but unlike its predecessor, it actually takes into account what I want and what I might need, instead of forcing something down my throat. I can accept a certain level of dissatisfaction if my needs are being met; as of yet, though, I am not dissatisfied with the general nature of Google's services (including Analytics) or their mentality toward their users, and my needs are being met and perhaps even predicted.
Tuesday, September 04, 2007
Smultron - How very nice
I decided during the redesign that the articles I write now should be about concepts, things, and processes which add value to my life. Being mostly an Apple/OS X/Unix/Linux diehard, I figured I would start with a little piece of software a buddy of mine found at work (great find, Nate).
I was looking for a BBEdit replacement. This is not to say that I think BBEdit is not worth every single penny they charge; in fact I would rather sport my "It Doesn't Suck" shirt all day, but I do not have purchasing authority at my new employer. I simply needed a nice color-coded code editor for quick and simple edits (notice I said color-coded; I know text editors are all over the place). I wanted to stay out of the terminal running vi or emacs this time and try some new apps. Enter Smultron, written by Peter Borg.
The first thing I like about this software: it's free. I did not have to email my new manager asking for a copy of BBEdit or, worse yet, Dreamweaver (what the other guys are using). The interface is simple, effective, and very OS X intuitive. Everything is just where I need it, without having to move my lazy hand from my multi-button, preprogrammed, can-access-everything-with-my-thumb mouse (no, seriously, I'm that lazy when it comes to ease of use).
I opened several files at once and instantly fell in love with the file/window navigator. It reminds me very much of the drawer functionality of Preview (by far one of my most favored applications). The color-coded text comes predesigned to cover the basics (for me that was PHP and some Perl). I have not tried any Python yet, but it handled HTML and XML without problems. I was also intrigued by some of the other functionality, including but not limited to partial AppleScript support, multi-document search with grep, small snippet support, and use as an external editor.
Not bad for free, huh? There are many other neat things you can do with Smultron; I just haven't had the need yet. If you want to find out more, visit the SourceForge page for Smultron, the nifty free text editor I am rapidly falling in love with.
Sunday, September 02, 2007