
Thursday, March 01, 2012

Getting to files from a ClockworkMod backup in OS X

I use ClockworkMod to back up my EVO. It's nice because I have the space, and backups are something everyone should make, especially if you are rooted and like to play around in the OS. ClockworkMod has all sorts of other features which I won't go into here; suffice to say it's worth installing.

This brings up the question of retrieving information from the backups, though. How does one go about this, especially in an OS X environment? We need to cover a few things before diving into the solution. Android, and therefore ClockworkMod, uses YAFFS (Yet Another Flash File System). When ClockworkMod backs up your files and directories, it saves them as .img files. These are not to be confused with the NDIF disk images of old on Classic Mac OS; they will not mount directly. You will need to convert the file system first. Let's discuss that part now.

You will need to go to the Google Code site for unyaffs. Download the unyaffs.c, unyaffs.h, and unyaffs files. Save them somewhere on your local drive (your Downloads folder will work fine). From there, open the Terminal application and cd to your Downloads directory (or wherever you saved the files).

From here you will need to compile the unyaffs application with gcc. This is simple enough, but just for grins we will go through it. Type the following at the Terminal command line:

gcc -o unyaffs unyaffs.c

This creates an executable in your folder (no sudo is needed for the compile itself, only for copying into /sbin next). You will want to copy it to your /sbin directory so that you can execute it without typing a full path. If you don't know how to get to /sbin, you can use the Finder's "Go to Folder" under the Go menu; just type /sbin.


Just drag and drop the unyaffs executable into /sbin (or copy it there from the command line with sudo cp unyaffs /sbin/ if you wish). Now go back to Terminal and navigate to the directory containing your YAFFS .img disk image. Use the following command structure to extract all the files in the disk image.

unyaffs diskImage.img

This will begin the extraction process. When you see the message "end of image" in Terminal, the extraction is complete. This was tested on OS X 10.7.2, and I assume it will work in most distros of Linux as well.
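
One final note: unyaffs appears to extract everything into the current working directory, so it's worth making a scratch directory first. A minimal sketch (the image name here is just an example):

mkdir extracted && cd extracted
/sbin/unyaffs ../data.img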

Thursday, October 06, 2011

My Apple Story

To all of you in Cupertino and Austin and to the family of Steve, my heart and thoughts are with you.

The world is mourning the passing of Steve Jobs today, and so am I. Steve passed away yesterday after a long battle with cancer. For those of you who use Apple products (the iPhone, iPad, iPod, Macs), this is a sad day. For those of us who spent time at Apple, it's almost a personal loss.

My first computer was a Commodore 64 with dual 5.25" floppy drives. It was quickly replaced with the first Macintosh. I was in love. I bounced from the 128K to the 512K model in a year, then to the SE, SE/30, and ultimately to the LC. My first personal computer at home was the LC II, followed by a Performa 6116CD (my first PPC chip), an 8150 WGS, a beige G3/233, and finally a Blue & White G3 with a G4 upgrade. Twenty years of history and memories defined by the Apple products I used at home and at work. I learned almost everything I know about computing (which is a lot, but not exhaustive) on the Macintosh platform.

In 1995 I started a contract with Apple at a call center for the AAC (I still have my 1990s Apple Assistance coffee cup). I worked for Apple during the dark times: the times of licensing the OS, clones, Pippin, System 7.5, and Gil Amelio. It was a hard time for Apple; there was a loss of focus on innovation, ease of use, and technology. Stock prices and market share were at an all-time low, and naysayers were constantly predicting the closure of Apple's doors. Yet there was always this spirit of hope. In the last few months of my tenure there, Apple bought NeXT, and it was announced that Steve would be returning as a consultant. I left that job to go work elsewhere. I received my hardware and software certifications during that time and was happy to see things beginning to change.

Then the most amazing thing happened: iMac. Ditch the beige boxes of yesterday and make the internet and home computing fun again for the everyday person. Apple stopped trying to compete with the PC market and made something fun and interesting again. iTunes and the iPod were just around the corner, bringing the wonders of mp3s to the masses. The iPod was not the first mp3 player - I owned a Rio (which worked with iTunes at the time) - but in my opinion the iPod changed the world. Mobile devices had never been so easy to use or so well integrated with the GUI and operating environment.

With the dissolving of the AIM alliance and the release of OS X, Macintosh computers exploded onto the scene as well. Innovation and design were alive again at Apple.

I could go on about the innovation of the iPhone and iPad as well, but those are much less personal to me. I turned many people to the Mac platform. I've used it for 27 years now.

The world became a better place for the technology and innovation developed by the Steves (yes I hold Woz in the same regard). My only hope is that Apple continues to foster and nurture the abilities of other visionaries and innovators.

Thank you Steve, you will be missed. Rest now our friend.

Thursday, March 25, 2010

eCalc



This last weekend I was working on some ANOVA problems for my class. I found that I was having problems reading my handheld calculator (purchased only for my proctored exams), and the standard Windows calculator was lacking a square root function. Now I know I can get around that by raising a number to the power of 0.5, but I find this silly. It's literally more work. This is why software was created.

This annoyance put me on a 30-minute chase around the web for a software calculator that is not connection dependent. I wanted one available for both Windows and OS X. Someone read my mind.

I found eCalc: web based, OS X Dashboard ready, and it runs in Windows. I found it nice, easy, and intuitive. This is great software! It does the following effortlessly:

Scientific Functions (Algebra, Trigonometry, Engineering)
RPN or Algebraic Operating Modes
Interactive Unit Converter
Linear and Root Equation Solver
Complex Number Math with Polar and Rectangular Formats
Drop-Down Stack with History
Interactive Decimal to Fraction Converter
Free Online Calculator
Windows Desktop Version (Win98, ME, NT4, 2k, XP, Vista; also works on 64-bit Windows 7)
Mac OS X Dashboard Version

Plus: A square root button...I'm so easily entertained.

$14.95. Done. Sold. My crappy handheld TI-blah-blah cost me $9 at Target. I have to admit I do like well-designed software, and I have a tendency to purchase based on functionality and design; this calculator won my devotion on both fronts. There is even an iPhone app ready and available.

Saturday, December 15, 2007

DMG Converter for OS X


DMG Converter is rapidly becoming my archiving tool of choice for OS X. With the exception of creating encrypted images, this nifty, free utility does just about everything. It currently supports just about every compression type across most major formats.

I personally use it to archive those CDs/DVDs which manufacturers still seem to want to send out. I prefer the dmg format, but sometimes when you need cross-platform support an .iso is the only way to go. The real beauty of this application is that if you are like me and run XP/Vista in Parallels, you need a quick and easy way to get an ISO loaded without having to stop what you are doing and go searching for physical media like a CD or DVD. This makes it easy to load in Parallels when you need to get something done quickly. I have included a screenshot of the image types the converter accepts and understands. This application was written for Tiger (10.4.x), but so far I have had no issues running it in Leopard (10.5.1).

Thursday, December 13, 2007

Advanced Web Ranking

Advanced Web Ranking is an SEO tool built specifically for SEO/SEM analysts and is available on Mac OS X, Linux, and Windows. It requires a JRE to work, but it utilizes the local environment, so the user interface feels seamless. I will be discussing the Mac version, although the topics apply to all versions.

The primary reason I use this tool (still in demo mode for me) is to determine the placement of my keywords in Google search results. I also check the sites of all my competitors, as well as the sites of those I think will rank highly in Google based on relevancy. Since it tracks the information over time, I can see how I am doing and how the changes I make to the sites are reflected (if at all) in Google SERPs. There are some things I seriously like about this software, and I will go through them first.

1. Import initial keyword lists during project creation - The software will take a list of keywords when you create a project. This saves a load of time; I keep all my keywords in a text file anyway, as I am constantly monitoring and changing the words. Con: I must change keywords inside the application, which means I must update the master list (whether it is a text file or a db) as well as the application.

2. The application will import a list of websites to check against the search engine for site placement - This is also a great feature. See #1 above for pros and cons.

3. AWR can update multiple projects at a time - either through cron (if you know me, you know I love to put things under cron's control) or through its manual process. The only downside to the manual process is the time it takes. Utilizing a cron job (which the built-in scheduler interfaces with quite easily) is simpler and can be run at intervals that maximize your CPU time (weekends, nights, etc.).

4. Output formats are universal for reporting - this includes CSV, Excel CSV, and XML. I utilize the CSV output for easy upload into a SQL DB. Once a week my reports are generated after an update and uploaded into a DB for easy lookup and for data redundancy. This is great because I do not have to rely on a localized db (like FileMaker or MySQL) running on my machine. The DB I upload to is tape archived and on a very good UPS. The data lives in two locations and is easily accessible to multiple persons. (A sketch of this upload step follows the list below.)

5. The contextual menus for keywords - these let you jump straight to the landing page for a site and keyword, or to the SERP as it stands at the current moment, which can quickly tell you whether your results have changed.
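
For what it's worth, here is a minimal sketch of that weekly CSV-to-database upload step, assuming a MySQL target; the database name, table name, path, and user are all hypothetical:

# a sketch: load the weekly AWR CSV export into MySQL
# (database/table names, path, and user are hypothetical)
mysql --local-infile=1 -u reporter -p seo_reports -e \
"LOAD DATA LOCAL INFILE '/path/to/awr_report.csv'
INTO TABLE rankings
FIELDS TERMINATED BY ','
IGNORE 1 LINES;"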

Some of the larger cons for AWR are mainly my own issues. I would like to see a toggle in the keyword listing for showing just the keyword selected in the primary list; this would save me from having to do a regular-expression search in the built-in search bar. I would also like the application to track (again, via a radio button or checkbox) whether a word is a paid term or paid keyword. This way I could tell quickly when to stop paying for a #1 or #2 keyword.

All in all this is a good application. It is not the entirety of what I am currently seeking (a keyword management tool which could auto-import from multiple locations), but it will do for getting me relevant data to analyze in my SEO roles.

- Chris Copeland

Thursday, November 29, 2007

Google Gadgets on OS X Dashboard

So Google announced this yesterday, but pushed implementation to today (11/29/07). I installed it as soon as it was available.

This is not quite what I expected....

Although useful, the gadgets available for the OS X Dashboard are not as plentiful as those for iGoogle, which I use frequently to monitor different data sets and to-do lists.

However, even given this, it will make development a cinch for people wanting to hit both the OS X Dashboard and Google Desktop markets.

The installer is only available through the "Google Updater.app", which comes with the 1.4.0.838 build of the Google Desktop for Mac OS X.

Wednesday, November 07, 2007

Securing your files in OS X

If you are like me, you are concerned about privacy and security regarding your files. In OS X you can use Apple's FileVault, which encrypts your entire home directory, or you can rely on 3rd-party applications to secure individual files for you.

There is a much easier way, though. Apple's Disk Utility will create 256-bit AES encrypted disk images for you, which you can read from, write to, and keep safe.
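
If you prefer the command line, hdiutil can create the same kind of image. A minimal sketch (the size, volume name, and filename here are just examples):

# create an encrypted, journaled HFS+ image from Terminal
hdiutil create -size 500m -fs HFS+J -volname "Private" -encryption AES-256 -stdinpass private.dmg

Attach it later with hdiutil attach private.dmg (you will be prompted for the password) and eject it with hdiutil detach /Volumes/Private.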

This is a perfect way to keep files compressed, together, organized, and encrypted. It also makes it much easier to back them up to NAS, DVD, or any of a number of backup utilities. My only suggestion is to use a long password. A friend of mine recently posted that he uses this method to keep his Quicken files encrypted. If you are running on a laptop, you definitely want to keep any financial records, lists of passwords, email, etc. encrypted. If the laptop gets stolen you can always replace it, but you cannot always repair the damage caused by identity theft or a ruined credit score.

Take it from a guy with a bachelor's degree in criminology, a master's degree in criminology, and a professional information security certification: you need to keep private or sensitive data encrypted.


Monday, October 29, 2007

Apache2 and Personal Web Browsing in Leopard

The answer to Friday's post is simple. Apple installed Apache 2, which has a different directory structure than the Apache 1.3 of previous releases.

The answer can be found on Apple's Forums. Also PHP is turned off by default, so be sure to edit the httpd.conf file to turn it back on.

From the post:
"I got PHP working with Leopard by modifying the httpd.conf file that you can get to by going to Go -> Go To Folder, /etc then going into the apache2 folder and copying httpd.conf to the Desktop (it won't let you edit in place). Find the line that says LoadModule php5_module etc...... and remove the # from the start. Save the file and drag it back into the apache2 folder, you'll have to authenticate to get it in there. Then restart apache by switching personal web sharing off and on in the sharing pref pane.

I had the same problem with my personal web sharing folder for my username not working on both the machines I installed it on (as an upgrade). The machine's web sharing is working though, just not the one for each user account. To fix it, create a file called shortusername.conf (where shortusername is your short username, e.g. joebloggs) and in it put this...

Directory "/Users/shortusername/Sites/"
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
/Directory

Save the file and put it into the apache2/users folder - restart personal web sharing and boom! It's working now."

I did all this in the terminal, sudo of course.
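
For reference, the same steps from Terminal look roughly like this (nano is just my editor of choice here; the paths are Leopard's stock Apache 2 locations):

sudo nano /etc/apache2/httpd.conf                  # uncomment the php5_module LoadModule line
sudo nano /etc/apache2/users/shortusername.conf    # add the Directory block from above
sudo apachectl graceful                            # restart Apache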

I hope this helps.

Friday, October 26, 2007

Apache post Leopard 10.5 install

The upgrade itself went fine. However, now I cannot see in a browser:
http://localhost/~ChrisCopeland

http://localhost works fine

I checked out 2-3 other Leopard upgrades; it's consistent on all machines.

My (current) permissions are:
drwxrwxrwx+ 14 ChrisCopeland ChrisCopeland 476 Oct 26 14:20 Sites

Apple tech support thought I was running OS X Server. The dude was not aware that Apache comes with all installs. Once we got over that hurdle he was still a little confused. I looked through the httpd.conf file but could not find anything out of the ordinary.

Can anyone else verify? Got any ideas?

Setup is:
Model Name: Mac Pro
Model Identifier: MacPro1,1
Processor Name: Dual-Core Intel Xeon
Processor Speed: 2.66 GHz
Number Of Processors: 2
Total Number Of Cores: 4
L2 Cache (per processor): 4 MB
Memory: 2 GB
OS X 10.5

Monday, October 22, 2007

How To: The Urchin Data Extractor part 2

Well, I said I would publish my script on Friday. Sorry about that; my wife and I moved into a new house over the weekend, and I guess I was just a little overzealous.

Anyway, the script. In the last post I showed you how to (basically) set up the Urchin script and make command-line calls to it. In this part, I will show you what I did to automate the process for YTD 2007, including all the months up to the current month. I utilized bash quite a bit here. I'm sure there is a faster way to automate this without using as many tmp files, but I like to keep my data in stages.

Here is the script:
#!/bin/bash
cd /Users/ChrisCopeland/Apps/scripts/urchin
currentMonth=`date '+%m'`
currentdate=`date '+%Y%m%d'`
currentYear=`date '+%Y'`
i=1
# the 10# prefix keeps bash from reading months 08/09 as invalid octal
let loopVar=10#$currentMonth+1
echo "Monthly Reports Available :"

while [ $i -lt $loopVar ]; do
echo "$i"
let i=i+1
done
read -p "please select the month or enter ytd for year to date: " -e input
case "$input" in
'ytd')
perl u5data_extractor.pl --begin 20070101 --end $currentdate --max 21000 --report 1201 >>fileTMP;
;;
'1')
perl u5data_extractor.pl --begin 20070101 --end 20070131 --max 21000 --report 1201 >>fileTMP;
;;
'2')
perl u5data_extractor.pl --begin 20070201 --end 20070228 --max 21000 --report 1201 >>fileTMP;
;;
'3')
perl u5data_extractor.pl --begin 20070301 --end 20070331 --max 21000 --report 1201 >>fileTMP;
;;
'4')
perl u5data_extractor.pl --begin 20070401 --end 20070430 --max 21000 --report 1201 >>fileTMP;
;;
'5')
perl u5data_extractor.pl --begin 20070501 --end 20070531 --max 21000 --report 1201 >>fileTMP;
;;
'6')
perl u5data_extractor.pl --begin 20070601 --end 20070630 --max 21000 --report 1201 >>fileTMP;
;;
'7')
perl u5data_extractor.pl --begin 20070701 --end 20070731 --max 21000 --report 1201 >>fileTMP;
;;
'8')
perl u5data_extractor.pl --begin 20070801 --end 20070831 --max 21000 --report 1201 >>fileTMP;
;;
'9')
perl u5data_extractor.pl --begin 20070901 --end 20070930 --max 21000 --report 1201 >>fileTMP;
;;
'10')
perl u5data_extractor.pl --begin 20071001 --end 20071031 --max 21000 --report 1201 >>fileTMP;
;;
'11')
perl u5data_extractor.pl --begin 20071101 --end 20071130 --max 21000 --report 1201 >>fileTMP;
;;
'12')
perl u5data_extractor.pl --begin 20071201 --end 20071231 --max 21000 --report 1201 >>fileTMP;
;;
esac

#cleans the slashes for easier editing
tr "/" "_" < fileTMP > fileTMP2
#start of line removal
sed '
s/_index\.cfm//
s/_&safe=vss//
s/_&adlt=strict//
s/\.cfm//
s/,//
s/\ $//' < fileTMP2 > fileTMP3
#note: BSD sed's -ie leaves a fileTMP3e backup, cleaned up with the rest below
cat cleanThese | while read line; do
sed -ie "/$line/d" fileTMP3
done
less fileTMP3 | cut -c 2-500 | grep -v "^ " | grep -v "^_" | grep -v "^#" >>fileTMPws
tr -s " " < fileTMPws > fileTMPcond
sed 's/ /,/' < fileTMPcond > /Users/ChrisCopeland/Sites/urchinreports/file$currentYear-Month$input.csv
#cleans temp directory
rm fileTMP*
echo "your report is complete"

Now we can go through the script. The first part should be self-explanatory: I set the values for the current date, month, and year (the year is not yet used in the report selection, but it will be soon). Then I ask the user which monthly report they would like to generate and show them a numerical list of the reports available (from 1 to the current month). This case function is where I will eventually implement the current year, so that a user can get the month and year data needed. At the moment we are only interested in the year 2007.
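
As an aside, the twelve case branches could be collapsed by computing each month's last day instead of hard-coding it. Here is a sketch, assuming the BSD date that ships with OS X; $input holds the month number as above:

# compute the month's date range: parse the 1st, add a month, back up a day
month=$(printf '%02d' "$input")
begin="2007${month}01"
end=$(date -j -f '%Y%m%d' "$begin" -v+1m -v-1d '+%Y%m%d')
perl u5data_extractor.pl --begin "$begin" --end "$end" --max 21000 --report 1201 >> fileTMP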

Next comes the hairy part: cleaning out the useless data. I keep a file of the most common items I want filtered, called cleanThese (simple name). Before I open that file, though, I want to clean certain characters that would otherwise get skipped over, because the generated log file contains a list of URLs and paths. Paths and URLs sometimes have awkward characters in them, like ";", ":", "/", etc. Try passing these on a command line sometime and you will see how troublesome they can be. So let's get them out of there.

tr "/" "_" - replaced "/" with an underscore, which will make it easier to clean the rest of the log.

Now we throw this whole thing to sed - a great program.
sed '
s/_index\.cfm//
s/_&safe=vss//
s/_&adlt=strict//
s/\.cfm//
s/,//
s/\ $//' < fileTMP2 > fileTMP3

We have a lot of older ColdFusion files, plus some items that cause problems in the clean file. This sed command, which is a chained command (one substitution per line), cleans these things out and leaves nothing in the pattern's place. You can see now why I cleaned out the extra "/" first: I would otherwise have been passing patterns full of "/" to sed, which it doesn't understand inside its s/// expressions.

Now the cleanThese file:
cat cleanThese | while read line; do
sed -ie "/$line/d" fileTMP3
done
This reads each line of the cleanThese file (which I can modify at will) and deletes every line matching the pattern, effectively removing it.

Then I want to clean up the formatting from the output of the cleaning:
less fileTMP3 | cut -c 2-500 | grep -v "^ " | grep -v "^_" | grep -v "^#" >>fileTMPws

This line cuts certain characters out (based on the original output), passes the result to grep with an inverted match three times (looking for different patterns), then writes it out to yet another tmp file.

The next line:
tr -s " " < fileTMPws > fileTMPcond
compresses all runs of spaces down to one and outputs that to another file, which sed then takes in, replacing the now-single space with a "," - effectively making this a CSV file, which is named for the current month and written to a directory.

The next couple of lines clean the directory of tmp files and report to the user in the shell that the report is ready.

This will leave you a clean, importable, ready-to-query CSV file just aching to be imported into a SQL engine of some sort.

Again if you want to run a different type of report, or use the case statement to generate a set of reports, you can visit my website to find a list of urchin reports available.

I want to find a good way to make this script available in a web interface at some point. I would also like to give the user a list of reports and years at the front of the application, just to help automate the process further.

Please Enjoy!

Thursday, October 18, 2007

How To: The Urchin Data Extractor (u5data_extractor)

You can get the Perl scripts for customizing Urchin data at the Google Urchin Support Page. I read what little documentation exists on the subject, which is a basic how-to without much substance. Urchin support firms charge something serious to get this kind of thing done, and here I am being a nice guy, giving away what I learned FOR FREE.

So let's begin with the lessons I learned.

1. Use some form of Linux/Unix. I could not, for the life of me, get any of these scripts to work on Windows, and I think this is because of the path: the Perl script is looking for a Unix-like path. I'm sure there are people out there, smarter than I, who can get this to work on a Windows server, but I am not one of them. The examples I give are run from a Macintosh running OS X 10.4.10, ActiveState Perl, and bash. In addition I would like to thank the wonderful folks (yet again) over at the macosxhints forums, as well as the unix.com forums, for helping me get the syntax in my scripts correct.

2. Use a step by step process.

3. Verify your data, and back up! The last thing you want to do is run an untested, "use at your own risk" script on your Urchin reports.

4. Do not always believe the available documentation.

5. When report testing, use small segments of data for your report. It saves time and you get to test your text scrubber faster.

Ok - now let's get to the logical process. What I wanted to do was to pull certain reports from Urchin and post them to a database, preferably some flavor of SQL.

The process will look something like this.
1. run perl script with start date, end date, report type, and number of items returned.
2. save report as a text file
3. scrub text file for bad characters, bad lines, and data which is not applicable.
4. comma delimit the file
5. hand csv file to sql import engine.

Sounds easy, right? It is, for the most part.

The u5data_extractor script will do a lot of this work for you. Below is the usage section of the script, which will also show up on the command line if you call the script with ~$ perl u5data_extractor.pl. I removed the copyright and some other text for the purpose of posting to the blog.
###########################################################
# Usage: u5data_extractor.pl [--begin YYYYMMDD] [--end YYYYMMDD] [--help]
# [--language LA] [--max N] [--profile PROFILE]
# [--report RRRR] [--urchinpath PATH]
#
# Where:
# '--begin YYYYMMDD' specifies the starting date (default: one week ago)
# '--end YYYYMMDD' specifies the ending date (default: yesterday)
# '--help' displays this message
# '--language LA' specifies the language for the report. Available
# languages are: ch, en, fr, ge, it, ja, ko, po, sp, and sw
# '--max N' is the maximum number of entries printed in the top 10 report
# types (default is 10).
# '--profile PROFILE' specifies the profile to retrieve data from. The
# default is specified at the beginning of this script
# '--report RRRR' is the 4-digit number for the report (default is 1102)
# Run this script with --help to see a list of available reports
# '--urchinpath PATH' specifies the path to the Urchin distribution.
# Note that you can edit the script and set your path as a default
###################################################

Giving the script your default path:
You will need to give the script the path to the Urchin Directory.
This is the line for my machine (following a Unix path):
my $urchinpath = "/usr/local/urchin"; # Path to the Urchin distribution

Give the script your default profile:
You will need to give the script the default profile.
This is the line for a made up profile in the script.
my $profile = "My Default Profile"; # Name of the default profile
This is important - you do not have to use %20 to represent spaces if you are using the quotes. Urchin, by default, stores the profile directories with %20 for whitespace characters.

The report number is a difficult thing. Where do you find those reports? I found an article, somewhere, which shows the report numbers. Have no fear: I made a list for you of the Urchin report numbers.

I will give an example, since none was really given to me. Let's say I want to run the "Visitors & Sessions" report from January 1, 2007 to October 27, 2007. When you call the script, you will use the following syntax:
perl u5data_extractor.pl --begin 20070101 --end 20071027 --report 1903 --max 10

This will generate the output on standard out (the screen), which I will not post for privacy reasons.

If you want to redirect the output, feel free to do so:
perl u5data_extractor.pl --begin 20070101 --end 20071027 --report 1903 --max 10 >> output.file

Tomorrow I will post my scrubbing process, as well as the script I use to call up the data and generate the reports.

Enjoy!

Monday, October 15, 2007

Using Bash Script to check for a server connection in OS X

Many thanks to robinwmills over at the macosxhints forums for the assist on this. I needed a script which would check for a server connection and, if that server was not present, attach the server with a name and password.

The issue here is that smb connections from an OS X box to a Samba/Windows box often just quit. It is a known issue requiring you to jump through fiery hoops to re-establish the smb connection. All I needed to do was dump some csv files to a location and let SQL Server 2005 handle the rest.

Here is the code:
#!/bin/bash
#checks to see if the share is mounted
stat /L/windows >& /dev/null
#note: $status is csh syntax; in bash the exit code lives in $?
if [ $? -ne 0 ]; then
echo 'setting up mount on /L'
osascript -e 'mount volume "smb://domain;username:password@10.0.0.0/share"'
else
echo 'alive'
fi
#file manipulation goes here
umount /Volumes/share

From what robinwmills explains:
stat does something like ls - it lists directory/file information. I can't remember why I used stat instead of ls; however, I don't think there was any special reason.

Anyway, I have the Share "L" on /Volumes/L (and a sym link /L -> /Volumes/L)

stat /L/windows >& /dev/null

says "pipe stdout and errout from stat to no-where". stat sets $status to 0 for success. So I can test to see if there is a windows directory on L. If you don't use the /L symlink trick, you could do stat /Volumes/L/windows >& /dev/null


What I did was establish the connection (authenticating to the domain controller) for only the amount of time needed to copy the files, then disconnect by umount'ing the volume. I'm sure there is probably a more graceful way of handling the disconnect, but it gets the job done. I had never used AppleScript from the shell before; that turned out to be quite handy, as did the line for connecting to the share with a domain, username, and password. I just passed this off to my local cron bad boy, and it is running like a charm. I have used shell in AppleScript before - I will definitely have to read up more on this.
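
For reference, the cron entry looks roughly like this (both the schedule and the script path here are just examples; add the line with crontab -e):

# run the mount-and-copy script every 15 minutes
*/15 * * * * /Users/ChrisCopeland/Apps/scripts/mount_share.sh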

I found it difficult peering through the search engines for this type of solution, so I will make sure this post gets the proper tags.

Enjoy!

Monday, October 08, 2007

PDF in iTunes

The idea of keeping PDFs of my research was certainly not a new one; I turned several students on to the idea in grad school. I even turned in a CD as the compilation of my research works, which was a lot simpler than turning in a few boxes of paper.

I read this fine article on Make this morning:
Digg user enjayenel wrote "I love using iTunes as a PDF library tool. I have hundreds of PDF manuals that I need organized access to. I just add them into the iTunes Library, edit the ID3 tags, create a smart playlist to group them together, and turn on the Browse feature (command-b) to get quick navigation to the PDF I am looking for based on title, product name (artist tag), and version (album tag). Double clicking the PDF opens it in your default PDF viewer".

Holy crap, Batman. I can't wait to get home to my several thousand PDF files and use one of the best applications yet. This functionality will change how I do research from now on. I have now read up on (almost) everyone's opinion regarding PDF files in iTunes, and there are some bits I like and some I don't. I think turning off "copy to iTunes directory" is a bad idea. Let me tell you why, in one word: backups. If you let your iPod auto-sync with iTunes, you automatically have a backup of your work. No more late night crashes, and even better, your information is organized.

To get that information organized, we have to modify the ID3 tags in the file's metadata in iTunes. I also agree that PDF support should be a plugin from Apple. On the Mac, imho, nothing beats Preview and its abilities; Foxit Reader may do it for me, and I will test it tonight.

I am now wondering if there is an automated way to import the data, perhaps the title information from the PDF itself. I know this is all just stored in the XML, but there must be a way. I will have to check the Automator script archives around the web to see if there is anything useful. Just the use of smart playlists has me jumping for joy.

Wednesday, October 03, 2007

TuneTagger

I have been looking for an application which could easily attach lyrics to my mp3 and AAC files in iTunes. I read about TuneTagger yesterday, tried it, and liked it right off the bat (which is unusual for me).

I like the way TuneTagger checks the tags against the CDDB and then displays what it calls the "approval pane", which lets you choose which information to embed into the song file.

This even works with non-syncing iPods (at least it works with the 5th gen). My iPod is not set to sync; I do it all manually. As long as iTunes plays a song, TuneTagger will verify the tags in the current or previous song.

This is a great app for $17.

Saturday, September 22, 2007

phpMyAdmin

If you are doing any type of web development and you are planning on using a MySQL database, do yourself a real favor and install phpMyAdmin. I have been using this solution for a few years now (since the move from PostgreSQL to MySQL), and I gotta say it is a really nice tool to have.

There are several versions available; the OS X version comes in a nice mpkg installer with an easy-to-start script which will generate your Blowfish key and get you going (that was a nice feature add-on). I keep Browzar (Windows) or Safari open to manage the tables and databases while I am working in other applications; it just makes things really nice.

For that matter, make sure you at least try out MySQL. The latest versions come in a variety of installer packages, complete with scripting to make sure the daemon is configured to launch at startup.

Monday, September 17, 2007

Using Automator to Show Hidden Files in BSD Filesystem

This has probably been posted somewhere else before, but I found it useful to move it beyond the standard shell script.

At times I want to see the hidden files in Mac OS X; at other times my OCD about being organized makes me crazy and I have to turn them off. This is a perfect example of using OS X's built-in functionality to simplify my life, by turning these easy scripts into Finder plugins with Apple Automator.

The scripts themselves are quite easy. To hide files:

#!/bin/bash
defaults write com.apple.finder AppleShowAllFiles FALSE
killall Finder

To show files:

#!/bin/bash
defaults write com.apple.finder AppleShowAllFiles TRUE
killall Finder

Now use these in Automator:


Save them as a plugin for the Finder:


and you are good to go:


This can be a real time saver. It does, however, kill the Finder (killall Finder), which causes the Finder to relaunch, but this has not been a problem since 10.2.
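
If you would rather maintain a single Finder plugin instead of two, a toggle version is straightforward. A minimal sketch:

#!/bin/bash
# toggle hidden files: read the current setting and write the opposite
current=$(defaults read com.apple.finder AppleShowAllFiles 2>/dev/null)
if [ "$current" = "TRUE" ]; then
defaults write com.apple.finder AppleShowAllFiles FALSE
else
defaults write com.apple.finder AppleShowAllFiles TRUE
fi
killall Finder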

Enjoy!

Friday, September 14, 2007

More Fun with Lynx

I grew up using gopher servers before there was a www or http, so when the real "web" came along it was, needless to say, awesome. One of the first web browsers I used was Lynx.

Lynx is a very, very simple browser, useful in scripts and for checking how a search engine views a webpage. If Lynx cannot see your content, it is very doubtful that a search bot will see it either.

So the last post showed how to use lynx to check Google's cache dates. This post will show you how to automate lynx to retrieve web information for you.

Here is a simple script which will read a file line by line and pass each term off to lynx for a Google search.

#!/bin/bash
cat ${1} | while read mySearchTerm; do
lynx -source -accept_all_cookies "http://www.google.com/search?q=$mySearchTerm"
done

This script will throw everything to standard out. What I do is pass the output on to a text file, or to grep for counting purposes.

#!/bin/bash
cat ${1} | while read mySearchTerm; do
lynx -source -accept_all_cookies "http://www.google.com/search?q=$mySearchTerm" |grep -c 'pattern.to.count'>> /path/to/text/file.txt
done

And now we have automatic document retrieval from Google. A word of warning: because this takes whatever is on the line, you must be careful of non-alphanumeric characters like !@#$%^&*-\/, as these will be passed on to Google too, which can alter the search results. You can also use things like the date command or other small *nix programs to alter the URL fed to lynx. If you want to schedule this sort of script, you can always use the crontab functionality found in Unix, Linux, and OS X. Be sure to read up on the man page for lynx.
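
If you want to guard against those characters automatically, here is a minimal sketch of a sanitized version (the whitelist of allowed characters is just an example):

#!/bin/bash
# strip anything outside the whitelist, then encode spaces for the query string
cat ${1} | while read mySearchTerm; do
cleaned=$(echo "$mySearchTerm" | tr -cd '[:alnum:] ._-' | sed 's/ /+/g')
lynx -source -accept_all_cookies "http://www.google.com/search?q=$cleaned"
done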

Enjoy.

Tuesday, September 04, 2007

Smultron - How very nice

I decided during the redesign that the articles I write should now be about concepts, things, and processes which add value to my life. Being that I am mostly an Apple/OS X/Unix/Linux diehard, I figured I would start with a little piece of software a buddy of mine found at work (great find, Nate).

I was looking for a BBEdit replacement. This is not to say that BBEdit is not worth every single penny they charge; in fact, I would happily sport my "It Doesn't Suck" shirt all day, but I do not have purchase authority at my new employer. I simply needed a nice color-coded code editor for those quick and simple edits (notice I said color-coded; I know text editors are all over the place). I wanted to stay out of the terminal running vi or emacs this time and try some new apps. Enter Smultron, written by Peter Borg.

The first thing I like about this software: it's free. I did not have to email my new manager asking for a copy of BBEdit, or worse yet, Dreamweaver (what the other guys are using). The interface is simple, effective, and very OS X intuitive. It seems to have everything just where I need it, without my having to move my lazy hand from my multi-button, preprogrammed, can-access-everything-with-my-thumb mouse (no, seriously, I'm that lazy when it comes to ease of use).

I opened up several files at once and instantly fell in love with the file/window navigator. It reminds me very much of the rawer functionality of Preview (by far one of my most favored applications). The color-coded text comes predesigned to cover the basics (for me that was PHP and some Perl). I have not tried any Python yet, but it handled HTML and XML without problems. I was also intrigued by some of the other functionality, including but not limited to partial AppleScript support, multi-document search with grep, small snippet support, and the ability to be used as an external editor.

Not bad for free, huh? There are many other neat-o things you can do with Smultron; I just haven't had the need yet. If you want to find out more, just visit the SourceForge page for Smultron, the nifty free text editor I am rapidly falling in love with.