Monday, December 19, 2005

Quick Digital Photo Tip - Fixes, Photo Sizes and Aspect Ratios.

When I first got my digital camera I talked a lot about the Beauty of Mega-Pixels and how more is better. However, as I prepare for upcoming Hanukkah celebrations and pictures, I wanted to share another little digital tip that I have picked up. Typically, with a good ol' film camera, when you go to develop a standard roll of film, you get a set of 4"x6" photos. This 'standard' photo size was carried over to the digital age, and you can easily find 4"x6" photo paper, as well as order such sized prints from photo sites like Snapfish, Shutterfly, and Kodak Gallery. Of course the beauty of digital photos is that we only need to print the 'Keepers' (and we can also blow them up and/or put them on a mug, apron, deck of cards, etc.) and we can either discard the others or preserve them in slide shows and on websites for years to come.

What this means is that people tend to take photos indiscriminately with digital cameras. So to all of those who will find a new digital camera under the tree, menorah, or whatever else your family sticks their gifts under, here are some quick tips:

  • The more pixels you have in your pictures, the more space they take up. In theory, a 6 Mega-Pixel image will take up about twice as much space as a 3 MP one. While the 6 MP image will give you much greater detail, the 3 MP one will give you an excellent photo at 4x6 or 5x7 - and even a good photo at 8x10 if you don't edit it much. If you reduce your camera's MP mode for most shots, you will be able to take a lot more pictures. One thing that I have done (incorrectly) in the past is shoot at the higher resolution until I get short on space. The problem with this is that when you want to take a detailed closeup at the end, you may not have enough space left. It's better to pace yourself throughout.

  • A 4x6 photo has an aspect ratio (the ratio of width to height) of 1.5 (i.e. it is 50% wider than it is tall), but most digital camera modes have an aspect ratio of 1.33. This means that when you print a digital picture on a 4"x6" print, the picture will need to be cropped. Bear this in mind when taking shots: leave a little bit of space between the top or bottom of the frame and the people or items you want in your picture, so that they are not flush against the edge.

  • Some cameras (mine, as well as other Sony models) have a great feature that combines the first two tips - a 1.5 aspect ratio mode that shoots at a reduced megapixel size. (Sony calls this 3:2 mode. On my camera it's 4.5 MP, which gets me about 10-15% more pictures on a 512 MB card.)

  • From the professional photographer to the snapshooter, everyone knows that for every good photo you take there are plenty more that just don't come out right. Fortunately for you, in today's digital age it is easier than ever to correct a lot of those mistakes in an otherwise good shot (like that great picture of the kids that would otherwise be ruined by little Jane's red eyes). To edit photos, I use Picasa from Google. It has a great set of simple-to-use tools for editing, organizing and sharing photos, and it works well with most of the popular photo sharing services around the globe. One of my favorite features of Picasa is that it keeps your original photo intact and only stores information about the edits you make. This way, if you screw up editing, it can easily get you back to the original.
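To put a number on the aspect-ratio tip above, here's a quick Python sketch (the function name is mine, purely for illustration) of how much of a 4:3 frame is lost when it is printed edge-to-edge at 3:2:

```python
def crop_fraction(image_ratio: float, print_ratio: float) -> float:
    """Fraction of the image's height cropped away when a wider
    print ratio is filled edge-to-edge (assumes print_ratio >= image_ratio)."""
    return 1 - image_ratio / print_ratio

# A typical 4:3 digital shot printed on 4"x6" (3:2) paper:
lost = crop_fraction(4 / 3, 3 / 2)
print(f"{lost:.1%} of the frame height is cropped away")  # about 11.1%
```

Roughly a ninth of the frame height disappears, which is exactly why you don't want heads flush against the top of the shot.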

Happy Holidays to one and all, and may all your pictures be good ones.

Thursday, December 15, 2005

Yonah's SQL Tip #5 - Naming Conventions

While there are millions of websites, magazine articles and even books that offer ways to improve your SQL code and make it more efficient, there are far fewer that provide insight on how to make your code more readable. One practice that was taught to me by a colleague several years ago, and that I try to follow wherever I go, is the idea of the column name prefix. Essentially, here is how it works - when I create a table, I prefix the field names with a 2-4 character mnemonic of the table name. For all fields that are specific to that table I use the same prefix. When I use fields from a different table (say, as foreign keys), I easily know which fields are local and which belong to other tables. Moreover, when I write a query, the field names match exactly.

For example, let's say I have a Products table that has SKU, Size, and Color fields, and an Orders table that has an Order ID, but also needs the SKU information and a quantity (QTY). Using this convention, my Products table will have these fields:

  • PROD_SKU
  • PROD_SIZE
  • PROD_COLOR

and my Orders table will have these fields:

  • ORD_ID
  • ORD_QTY
  • PROD_SKU

Because the quantity and the ID in the Orders table belong to that table, they start with the ORD prefix. But because PROD_SKU is borrowed from the Products table, the PROD prefix indicates clearly that this is the SKU from the Products table.
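To see the convention in action, here's a small sqlite sketch in Python (the table and column names follow the hypothetical Products/Orders example above). Notice how the borrowed column keeps its home table's prefix, so the join condition needs no decoding:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Products (PROD_SKU TEXT PRIMARY KEY, PROD_SIZE TEXT, PROD_COLOR TEXT);
    CREATE TABLE Orders   (ORD_ID INTEGER PRIMARY KEY, PROD_SKU TEXT, ORD_QTY INTEGER);
    INSERT INTO Products VALUES ('SKU-1', 'M', 'Blue');
    INSERT INTO Orders   VALUES (1, 'SKU-1', 3);
""")

# The prefixes make it obvious which columns are local (ORD_*)
# and which are foreign keys borrowed from Products (PROD_*):
rows = con.execute("""
    SELECT o.ORD_ID, p.PROD_SKU, p.PROD_COLOR, o.ORD_QTY
    FROM Orders o JOIN Products p ON p.PROD_SKU = o.PROD_SKU
""").fetchall()
print(rows)  # [(1, 'SKU-1', 'Blue', 3)]
```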

Blogger Tools ... coming soon...

So I have finally found a few minutes to play with software. While my job does allow me to experiment a little bit with new technologies, for the most part it's about developing business software. However, for the past couple of weeks, after the kids have gone to sleep, I have been experimenting. Since I moved my blogs from MovableType to Blogger, out of necessity I developed a small tool for exporting my old blog entries to the new one. I am also playing with AJAX a little bit (more precisely, it would be just JAX, because there is nothing Asynchronous about my XML files). In my experiments, I have developed a tool that takes the XML file and images generated by Picasa's 'Export to Web Page' feature and uses them to make a web slideshow of your images.

I hope to have these tools ready for prime time in January and will post 'em here for your review.

Tuesday, December 06, 2005

All this talk about AJAX is making me nostalgic.

I just read the following article on Eric Pascarello's blog: Eric's weblog - AJAX... Is It Hype?. It seems that every development website you turn to is talking about AJAX. For those not in the know, AJAX is an acronym for Asynchronous JavaScript and XML. The phrase was coined by Jesse James Garrett to refer to a specific type of web application.

AJAX applications essentially minimize 'browser flash' by loading the UI into a single web page and then updating that page behind the scenes with XML data obtained from the web. Google has developed a lot of these applications of late, like Gmail and Google Maps.

But while the term AJAX is new, the technologies used to fuel it are almost a decade old. Eight years ago, I was working for a small consulting firm and was asked to develop a rich application for Dow Jones / Telerate. Flash was in its infancy and used for not much more than pretty displays (most of the hooks for data had not come into the picture yet), and JavaScript was being used heavily as a client-side development tool. For Dow, we developed a prototype called TAC - Telerate Active Channels. TAC was a very simple project to illustrate how a single web page could be opened, kept open a whole day, and still receive active content as the day wore on. One of our designers developed a single web page to use for the UI. Together we worked on the DHTML and JavaScript elements, and then I needed to do the hard part - getting the data there. I basically wrote a small Java-based client and server application to handle the data. The server would generate random news stories and stock/fund/index quotes from data in text files. The client, a Java applet with no UI, was able to log on to the server and subscribe to a news feed that would automatically update the UI via JavaScript. Sound familiar? While we didn't know it at the time, we were developing technologies very much akin to AJAX and her cousin, RSS. We did it all without any real APIs, tools, or 3rd parties.

While the prototype was far off from a real application, it piqued Dow Jones' interest and led to follow-on work developing offline (yes, offline!) browser-based trading applications (one for Commercial Paper, the other for ForEx). I know what you're thinking - why would anyone need an offline browser-based application? Well, back in 1997 many companies didn't have broadband, and wanted to work offline before dialing up and syncing. With some of my colleagues, I developed a local browser app that could save data to an MS Access database and then synchronize when connected. I wonder if anyone is doing that with AJAX? :)

Unfortunately, Bridge Information Systems bought Telerate from Dow Jones while the project was in development, and sadly, it never got launched. Maybe I will develop an AJAX app - if not for anything else, than for old times' sake :)

Sharp TM-150 Review

It is very easy to give a glowing review of a new product. Quite frankly, when you review a product after only a week or two, you are reviewing as much the functionality you've been able to learn as the potential you see for that product in the long run. Reviewing a product after some time, however, is a different story, because by then you know all of the features you are going to use, and at the same time the potential is gone - it has either been realized or not.

In this vein, I am reviewing the Sharp TM-150 after about 10 months of use. While it is a decent phone (I do still use it every day), I must say that I am disappointed in it overall. In general, while the megapixel camera function is decent, I found this phone to be lacking in overall features such as PIM synchronization, battery life and connectivity.

The primary reason I bought this phone, of course, was because it was one of the first Megapixel camera phones on the market. As camera phones go, it is a much better camera than most of its peers. I took the picture on the left in Megapixel mode with the camera (using the external LCD as a viewfinder). On several occasions where I didn't have my digital or traditional camera with me, I was able to take several photos with the camera phone and print 4x6 prints from them. I have one of them (the pic just to the left of my head in the picture to the left) hanging in my cubicle at work, and people still don't believe I took it with my phone.

However, it is still not a substitute for a real camera. Your arm and the subject need to be perfectly still while shooting, or else there is a considerable amount of blur. As for the other features, this phone falls short - I can't go more than a day or two without needing a charge. But worst of all, for someone like me who needs a PDA but doesn't want to carry one, this phone was useless. I bought the data kit for it - thinking that it would sync Outlook and charge my phone at the same time - but all it did was let me load contacts onto my phone. It seems that the only way to load calendar data is by entering it with the phone's keypad, rendering that feature totally useless. And because the phone is only tri-band and not quad-band, it struggles to get service in some areas.

In any case, this review is moot, because TMO, for the most part, doesn't even sell this phone anymore. I hope that even if I do stay with T-Mobile, I will be able to replace this with a phone that has Bluetooth Outlook sync and a much better battery life - oh, and a camera. I guess that's the only good thing to come out of this phone - I'm jaded.

Monday, December 05, 2005

XBox bait and Switch

So apparently, there are a lot of folks out there trying to make a profit on the new XBOX 360. While it's striking enough that 10% of all XBOXes sold in the first few weeks were somehow resold on eBay for a profit, I was surprised to see the linked article (click the title), which accuses a major retailer's local Portland, Oregon store of playing bait-and-switch tactics.

The new XBOX seems so popular that even one of my favorite 'deals' sites - Ben's Bargains - has opened a separate site just to track XBOX inventory. Check it out at XBOX 360 Tracker.

Personally, I just don't understand the allure. Maybe it's because I am not a gamer. If I do indeed buy one (which, unless it is given to me, I won't), I would wait at least a few months, to the point where it could be had for the sticker price - surely not a 150% premium!


What if your boss turned to you this week and asked you to write a simple search that looked up a contact in a database table by first or last name? On the surface it sounds easy. If your last name field were called LAST_NAME, your query might look like this:

WHERE LAST_NAME = 'wolf'

Seems simple enough, but not without problems. Take my last name - Wolf - for example. Let's say you knew what my last name was, but you didn't know how it was spelled - so it could be Wolf, Wolfe, Wolff, Woolf, etc. The above query would require me to try every possible spelling to find a result. So maybe as a precaution, you would try to match the beginning of the word using like:

WHERE LAST_NAME like 'wolf%'

This would do a little better, but it has two major shortcomings - it will exclude names that should be included, like Woolf, and will include names that shouldn't be, like Wolfish or Wolfson. Of course, my last name is a limited example. If we were searching for, say, Smith, searching under Smith will also return Smithson but not alternative spellings of Smith, such as Smythe.

Fortunately, there are two SQL functions that can help in this endeavour: SOUNDEX and DIFFERENCE.

SOUNDEX takes a string as an argument and returns a value that is based on the way the string sounds (I am not 100% sure how this is calculated, but it works well in practice). So, re-writing our earlier query:

WHERE SOUNDEX(LAST_NAME) = SOUNDEX('wolf')

This query will give us the desired effect of matching similar-sounding names with different spellings, without excluding most spellings and without including different names. DIFFERENCE takes this one step further. It essentially takes two strings as arguments and compares their SOUNDEX values, returning a score between 0 and 4, with 4 being the closest match. Using DIFFERENCE we can re-write the query one more time, expanding our horizons to include not-so-exact matches, but also ranking them by closeness:

WHERE DIFFERENCE(LAST_NAME, 'wolf') >= 3
ORDER BY DIFFERENCE(LAST_NAME, 'wolf') DESC

What this query does is find names that are not quite 'Wolf' but are closely related, and then rank them in order of closeness (i.e. it will match Wolf, Wolfe, and Wolfson, ranked in that order).

So the next time you're in a matching bind, give DIFFERENCE and SOUNDEX a try.
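For the curious, the SOUNDEX calculation is simple enough to sketch yourself. Below is a simplified Python version of the classic American Soundex algorithm - note this is an approximation for illustration (SQL Server's implementation has extra edge-case rules, e.g. around H and W), not the exact code the database runs:

```python
def soundex(name: str) -> str:
    """Simplified American Soundex: first letter plus up to three digits."""
    groups = {"1": "BFPV", "2": "CGJKQSXZ", "3": "DT", "4": "L", "5": "MN", "6": "R"}

    def code(ch: str) -> str:
        for digit, letters in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and H, W, Y get no digit

    name = name.upper()
    result = name[0]           # keep the first letter verbatim
    prev = code(name[0])
    for ch in name[1:]:
        c = code(ch)
        if c and c != prev:    # collapse adjacent duplicate codes
            result += c
        prev = c
    return (result + "000")[:4]  # pad/truncate to letter + 3 digits

print(soundex("Wolf"), soundex("Wolfe"), soundex("Woolf"))  # W410 W410 W410
print(soundex("Wolfson"))                                   # W412
```

This makes the article's behavior concrete: Wolf, Wolfe, and Woolf all collapse to the same code, while Wolfson lands one digit away - close enough for DIFFERENCE to score it a 3.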

Wednesday, November 30, 2005

Sometimes my predictions do come true...

The FCC’s ruling earlier this week to allow for the purchase of individual cable channels is the beginning of something that I have been predicting for a while, the unbundling of broadband access from the applications that run over it. This is the first step in basically allowing us, the consumers, a choice of broadband providers that is different and independent of the choice of the application and service providers that provide us services over the IP network.

To better illustrate this, imagine, if you will, that your local gas company not only charged you for the gas provided to your home, but also dictated how high you could turn your oven, or which specific cycles you could use with your dryer. Sounds crazy, no? But this is exactly what the phone and cable companies do - they charge you for both the delivery mechanism (either dial tone or cable service) and then dictate how you can use those services (in fact, it was only about 25 years ago that the phone company even began to let you plug your own phone into the wall jack). So far, Voice over IP (VoIP) has shown us that we don't need the phone company to provide us with phone service. In fact, I can even get cable and/or DSL service and cut out the local phone company altogether. As phone, cable, and software companies roll out IPTV services, it is easy to see that the cable and phone companies are at a crossroads.

Before long, so long as I have IP service at home, I will be able to shop online for phone service, TV and video content, and a bunch of other things that we haven't thought of yet.

Critics (read: cable companies and broadcasters) of the plan indicate that this will kill independent channels that will not have enough audience to stay afloat. As a consumer who has no need for, say, five Discovery channels, I say: a) Why should I subsidize them? and b) Who cares? While I do think that this will kill some small independent channels, I don't think it will do too much damage to the TV production industry - because on-demand will change the way we think about TV. The reason we need channels right now is that we haven't mass-marketed the technology to deliver on-demand TV and video. We need five Discovery channels because someone might want to watch 8 hours of The Crocodile Hunter while someone else wants to watch American Chopper. But with on-demand, there is no need for channels, so long as the video is on a server out there and my TV has a way of downloading it. Because programming an entire schedule isn't necessary, there is no need for broadcasting one - all we need is a good search engine for video and a credit card account. (Oh, so that's why Google created Google Video.)

Let’s hope that the FCC wins this one, and our living rooms finally catch up to our server rooms as far as technology is concerned.

Trying to re-start this once more


So after many false starts, I am trying to get my technology blog rolling again. Please bear with me as I port over my existing blog entries. I hope to actually get some relevant posts up here soon.

Wednesday, September 21, 2005

How to handle server-specific configuration settings in C#.Net

One of the biggest hassles I've had with deployment so far has been trying to ensure that I preserve the correct global variables for my application when I deploy my code from development to staging to production - the primary example being database connection strings.

In doing some research, I discovered that .Net allows you to create customized sections in your web.config, and then I came across this article on ToDotNet -

The article explained how to create a custom XML handler and have it look at the URL to pick the correct connection strings.

Although it was written in VB.Net, it was very easy to port to C#, and I also added two small enhancements:

  • I added the ability to add a default section, just in case the server name doesn't match one of the other defined sections.
  • I also had it include the name of the selected configuration, so that you can export it for debugging purposes - i.e. in case you are not sure which configuration is selected.

Below is my C# code. If you want to implement the full solution, look at the above article:

using System;
using System.Web;
using System.Xml;
using System.Configuration;
using System.Collections;

namespace ujci_v3
{
    /// Summary description for ConfigHandler.
    public class ConfigHandler : Hashtable, IConfigurationSectionHandler
    {
        /// For now, an empty constructor
        public ConfigHandler()
        {
        }

        /// This implements the Create interface of IConfigurationSectionHandler,
        /// needed to be able to retrieve the config info
        /// from web.config
        public object Create(object parent, object configContext, XmlNode section)
        {
            try
            {
                //Get the host name
                string server = HttpContext.Current.Request.Url.Host;

                //Find the configuration for this hostname
                //(the XML layout follows the ToDotNet article referenced above:
                //one <config> block per host name, each containing <add> elements)
                string selected = server;
                XmlNodeList NodeSettings = section.SelectNodes("config[@name='" + server + "']/add");

                //If no config is found, use the default.
                if (NodeSettings.Count == 0)
                {
                    selected = "default";
                    NodeSettings = section.SelectNodes("config[@name='default']/add");
                }

                //Also, add the 'Config' key to tell us which configuration is being used.
                this["Config"] = selected;

                foreach (XmlNode n in NodeSettings)
                {
                    this[n.Attributes["key"].Value] = n.Attributes["value"].Value;
                }
            }
            catch (Exception ex)
            {
                //If any error occurs, add the error message as part of the config, to help us debug.
                this["Error"] = ex.Message;
            }
            return this;
        }
    }
}
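For reference, here is one possible shape for the web.config section such a handler reads. The element and key names here are illustrative only (the real layout comes from the ToDotNet article) - the idea is one config block per server name, plus a default:

```xml
<configSections>
  <section name="serverSettings" type="ujci_v3.ConfigHandler, ujci_v3" />
</configSections>

<serverSettings>
  <config name="www.mysite.com">
    <add key="ConnectionString" value="server=prodserver;database=app;" />
  </config>
  <config name="default">
    <add key="ConnectionString" value="server=devserver;database=app;" />
  </config>
</serverSettings>
```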

Thursday, August 11, 2005

Views vs. Stored Procedures

Wow - I've got to do these more often! I recently hired a new programmer, and we were having a discussion regarding the best way to handle complex queries involving lots of joins. Obviously, running one of those complex queries ad-hoc (even if it is somewhat optimized) will put a strain on your server. In certain situations, it definitely makes sense to use a 'pre-compiled' version of this query - but the question is, what kind?

You could create a stored proc that, when executed, returns the appropriate result set, or you could create a view that allows you to query the joined tables as if they were actually one big happy table. While different schools of thought have evolved around this subject, I generally use the following rules of thumb to determine when it is appropriate to use either a Stored Proc or a View:

Use a stored proc if:

  • You need to obscure specific details of the query from the querier
  • You need to perform advanced logic or calculations against the data before returning it (beyond basic aggregation).
  • You will only be using the data in one or two specific places and will not need to further filter, sort, or modify it.

Use a view if:
  • You need the ability to further modify or manipulate the data - by specifying sorting or aggregates.
  • You will be using this in enough places that maintaining a separate stored proc for each one would be cumbersome.
  • You want to select the specific output columns to be passed back to your application
  • You need to join the results of the data to another table or tables.
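As a quick illustration of the view side of that list, here's a sqlite sketch in Python (the table names are hypothetical). The point is that each caller can still filter and sort the view's results - exactly the flexibility a stored proc's fixed result set doesn't give you:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Products (PROD_SKU TEXT, PROD_COLOR TEXT);
    CREATE TABLE Orders (ORD_ID INTEGER, PROD_SKU TEXT, ORD_QTY INTEGER);
    INSERT INTO Products VALUES ('SKU-1', 'Blue'), ('SKU-2', 'Red');
    INSERT INTO Orders VALUES (1, 'SKU-1', 3), (2, 'SKU-2', 1), (3, 'SKU-1', 5);

    -- The complex join is written once, in the view...
    CREATE VIEW OrderDetails AS
        SELECT o.ORD_ID, p.PROD_COLOR, o.ORD_QTY
        FROM Orders o JOIN Products p ON p.PROD_SKU = o.PROD_SKU;
""")

# ...but each caller can still shape the result however it likes:
big_blue = con.execute(
    "SELECT ORD_ID FROM OrderDetails WHERE PROD_COLOR = 'Blue' AND ORD_QTY > 4"
).fetchall()
print(big_blue)  # [(3,)]
```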

.Net Tip - Caching and Data Updates

Wow, it's been a while! I promise to update more frequently. To make up for not posting for a while here is a very useful caching tip that I use on my own sites.

We all have user controls that appear on every page (a perfect example of this - a header control or a navigation control). User controls can very easily be cached using the OutputCache directive:

<%@ OutputCache duration="3600" VaryByParam="none" %>

By embedding the above tag into a user control, it will be cached for an hour. But what if the data it displays changes during the course of the hour? How do I ensure that the data will be updated in a timely manner? The simple solution is in the VaryByCustom attribute of the tag.

VaryByCustom calls a special function called GetVaryByCustomString and passes it the value you specify for the VaryByCustom attribute in the OutputCache tag. Essentially, you write the code to determine what value gets returned. With each subsequent request for the cached object, the ASP.Net engine first checks the result of the GetVaryByCustomString function. If the result has changed, then regardless of the duration on the cache, the system will bypass the cache and execute the code in the control.

Therefore, it is very easy to set up, say, an application variable that gets changed only when the data gets changed, then use the GetVaryByCustomString function to check for updates.

So your OutputCache line will now look like this:

<%@ OutputCache duration="3600" VaryByParam="none" VaryByCustom="updateCheck" %>

Then make the following changes to your Global.asax:
  1. Add the following line to your Application_Start method:

    Application["serial"] = DateTime.Now.Ticks.ToString();

  2. Add the following code as the function GetVaryByCustomString:

    public override string GetVaryByCustomString(HttpContext context, string arg)
    {
        if (arg == "updateCheck")
        {
            return Application["serial"].ToString();
        }
        //We need to return an empty string so that this compiles
        return "";
    }

Now you are almost done. All you still need to do to ensure that your cached pages are updated when the data changes is to make the following call from your data-update code:

    Application["serial"] = DateTime.Now.Ticks.ToString();

This will update the application variable, and change the result of the GetVaryByCustomString function, which will force the Cache to update.
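Stripped of the ASP.Net specifics, the invalidation idea is simple enough to sketch in a few lines of Python (the names here are mine, not part of any framework): a cached value stays fresh only while the 'serial' it was built under is still current.

```python
serial = "v1"   # stands in for Application["serial"]
_cache = {}     # key -> (serial at build time, value)

def cached(key, build):
    """Return the cached value for key, rebuilding only if the serial changed."""
    entry = _cache.get(key)
    if entry is not None and entry[0] == serial:
        return entry[1]           # cache hit: serial unchanged
    value = build()               # cache miss or stale: rebuild
    _cache[key] = (serial, value)
    return value

builds = 0
def build_header():
    global builds
    builds += 1
    return f"header build #{builds}"

cached("header", build_header)   # builds once
cached("header", build_header)   # served from cache, no rebuild
serial = "v2"                    # data changed -> bump the serial
cached("header", build_header)   # stale entry detected, rebuilt
print(builds)  # 2
```

The cache duration never has to be guessed at: content is rebuilt exactly when the data owner bumps the serial, which is the same trade the VaryByCustom trick makes.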

Wednesday, May 25, 2005

My server has a usb port on the front, should I be worried?

Recently at work we got a handful of new servers. In addition to all of the standard bells and whistles, and expected improvements in processing power, capacity and speed, these servers also had another new feature on their front panel - a USB port.

The intent, of course, is that those sys admins who carry around configuration files and other utils on USB keychain drives will be able to easily plug those drives in to update these servers.

Convenient as it might be, it really scares the crap out of me. Think for a minute about the following scenarios:

1. Someone plugs in a bootable usb drive and then hard-cycles the machine and manages to put a virus, spyware or other malware on the machine.

2. Someone plugs in a USB wireless adapter, and then has an accomplice connect to the server via wi-fi to steal information or put trojans on the machine

3. Someone walks in, plugs their digital music player into the server, downloads oodles of sensitive data, and then walks out, completely unsuspected.

These might sound a bit far-fetched, but in reality, why couldn't a savvy janitor pull something like this off?

Tuesday, April 19, 2005

Yonah's SQL Tip #3 - 2 tricks with CUBE and ROLLUP

Last week we talked about the CUBE and ROLLUP operators for SQL. Of course, they are very versatile, but are sometimes hard to work with. In this week's SQL Tip, we are going to focus on two things - providing custom group names for aggregates and altering the sort order so that aggregates come last.

First, let's start with this query against the pubs database that ships with SQL Server:

select type,title,avg(price)
from titles
group by type,title
with rollup

The results, of course, will show the type, title, and average price for each entry in the titles table (of course avg is meaningless since each title only has one price, but you get the point). We will also see that for each 'type' there is one additional entry that has a NULL value in the title field. This entry is the average price for all of the titles of a specific type. Of course, the NULL is unsightly. It would be much more user-friendly if the title field contained a useful phrase like Average for all titles or Average for all type titles, where type is the specific type.

This is very easy to do, using the isNull() function:

select type,isNull(title,'Average for all '+type+' titles') as title,avg(price)
from titles
group by type,title
with rollup

With this small change we now have a pretty name for our 'rolled-up' field. However, we still have a problem with sorting - because if we sort by title, the averages will come up close to the top of the list. So now we need a little trick to get them to sort properly. That's where we can make use of a handy little CASE statement:

select type,isNull(title,'Average for all '+type+' titles') as title,avg(price),
case when title is null then 1 else 0 end as sort_order
from titles
group by type,title
with rollup
order by type,sort_order

This will effectively sort the totals for each type to appear below all of that type's titles. But let's say we want to put all of the totals at the bottom - we can simply change the last line to read:

order by sort_order, type

By reversing the order of the columns, we now ensure that all specific titles appear first, before the type summaries.
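Outside of SQL, the same two moves - substituting a label for the NULL marker and synthesizing a sort key - look like this (a Python sketch over simulated rollup output; the titles and prices are made up):

```python
# Simulated ROLLUP output: (type, title, avg_price); None marks a rollup row.
rows = [
    ("business", None, 15.97),
    ("business", "The Busy Executive", 19.99),
    ("business", "Cooking with Computers", 11.95),
    ("psychology", None, 21.50),
    ("psychology", "Is Anger the Enemy?", 21.50),
]

labeled = [
    (t,
     title if title is not None else f"Average for all {t} titles",  # the isNull() trick
     price,
     1 if title is None else 0)                                      # the CASE ... END column
    for t, title, price in rows
]

# ORDER BY type, sort_order: each type's total drops below its own titles
labeled.sort(key=lambda r: (r[0], r[3]))
for t, title, price, _ in labeled:
    print(t, "|", title, "|", price)
```

Swapping the sort key to `(r[3], r[0])` reproduces the second variant, with every total pushed to the bottom of the whole report.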

Friday, April 08, 2005

Yonah's SQL Tip of the Week #2 - Using CUBE and ROLLUP

SQL has two great operators to let you produce 'pretty' reports from aggregate queries. Basically, when you perform an aggregate query like SUM or AVG and use GROUP BY to aggregate the results, you can then use either WITH CUBE or WITH ROLLUP to produce totals for all of your groups.

For example, let's say you have an Orders table for an e-Bookstore that has the following three fields - Date, Title, Price_Paid - the first two are the date of purchase and the book title, and the third field is the actual price paid by the consumer. Since the site runs promotions all of the time, the price paid by each individual varies. You have been asked to provide a report that shows the average price paid per title - simple enough, your query will probably look like this:

SELECT Title,AVG(price_paid)
FROM Orders
Group by Title

Simple enough, no? So now management comes back and says, hey, this is great, but we'd like to see it with monthly totals. So you re-write your query as follows:

SELECT Title,DateName(m,DATE)+' '+DateName(yyyy,DATE) as Month,AVG(price_paid)
FROM Orders
Group by Title,DateName(m,DATE)+' '+DateName(yyyy,DATE)

Essentially the output from this query has three columns - the title, the month, and the average price paid for that title in that month. It is also important to note that the order in which you specify the GROUP BY fields does make a difference - if you keep the order above, you will notice that the records will first be sorted by title and then by date. So all of the records for title 1 will be displayed in chronological order before the results for title 2. If you reverse the order of the arguments so that the date comes first in the GROUP BY, you will notice that all of the records will be date-ordered and then ordered by title. Although we could alter this by adding an ORDER BY clause, it's nice to know that this is the default behavior. This ordering also comes in handy with the CUBE and ROLLUP operators.

For those of you not familiar with them, the CUBE and ROLLUP operators can be used in conjunction with the GROUP BY clause to produce aggregates over a greater group. Say, for example, we wanted our query to not only compute the average price paid per title per month, but also include the overall average price for a specific title. Or the average price for all titles in a specific month? That's where CUBE and ROLLUP come in.

Here is our query again, with one small modification:

SELECT Title,DateName(m,DATE)+' '+DateName(yyyy,DATE) as Month,AVG(price_paid)
FROM Orders
Group by Title,DateName(m,DATE)+' '+DateName(yyyy,DATE)
with rollup
Executing this new query now gives you almost the same results as the previous one, with one difference. At the very end of each title group, there is an extra row with 'NULL' for the month. This is the overall average price paid for the book. If we reverse the order of the grouping - putting the month column before the title one, we will instead now have a row with a 'NULL' title value, which is the average price for all titles for that month. Now let's make one more change to our query:

SELECT Title,DateName(m,DATE)+' '+DateName(yyyy,DATE) as Month,AVG(price_paid)
FROM Orders
Group by Title,DateName(m,DATE)+' '+DateName(yyyy,DATE)
with cube

If you look at the results of this query, you will see that it combines the outputs of the two rollup orderings. The general rule of thumb is that ROLLUP produces aggregate rows for every prefix of the grouping list, while CUBE produces them for every combination of the grouped columns. So if we split year and month into separate columns and used GROUP BY Year,Month,Title in our rollup query, we would get rows representing:
  1. The average price paid per title in a given month and year
  2. The average price paid across all titles in a given month and year
  3. The average price paid across all titles in a given year
  4. The overall average price paid, across every order.

If we changed it to CUBE, we would also get rows for all of the remaining combinations - for example, the average price per title across all months and years, or the average for a specific month across all years.
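One way to internalize the prefix-vs-subset rule is to enumerate the grouping combinations each operator produces. A small Python sketch (the helper names are mine) makes them explicit:

```python
from itertools import combinations

def rollup_sets(cols):
    """Grouping combinations produced by WITH ROLLUP: every prefix
    of the grouping list, down to the grand total ()."""
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube_sets(cols):
    """Grouping combinations produced by WITH CUBE: every subset."""
    out = []
    for r in range(len(cols), -1, -1):
        out.extend(combinations(cols, r))
    return out

print(rollup_sets(["Year", "Month", "Title"]))
# [('Year', 'Month', 'Title'), ('Year', 'Month'), ('Year',), ()]
print(len(cube_sets(["Year", "Month", "Title"])))  # 8 subsets for 3 columns
```

Each tuple is one level of aggregation the database computes for you; the empty tuple is the grand-total row.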

As you can see, these are powerful tools to add to your SQL repertoire - next week, we will provide some additional tips for making them work better.

Monday, March 28, 2005

Having fun with the Google APIs

A while back, I downloaded Google's APIs. Essentially, they provide access to Google's search database for those who want to experiment with its site and search tools.

Among the many ideas that I toyed with was the opportunity to find the rank of a specific site for a given keyword. In other words, let's say I sold Widgets - it would be pretty useful to find out what page of Google's results my site shows up on when I search for the term Widgets.

So using the Google APIs, I managed to develop a tool that does just that. It simply iterates through the search results and tells you what number page your result shows up on.

Of course, the Google apis are limited to 1000 hits a day, which can get used up very fast. Instead, I am currently revamping this to search Google, Yahoo, MSN, and A9 without having to use the APIs. When I finish, I will be more than glad to post the code.

Another thing that I think will be useful - coming up with a unique enough term that will almost guarantee that your site will be found when looking for it. When I figure that one out, I will change my blog name, and then let you know :)

Yonah's SQL Tip of the Week #1 - Top and Sorting Views

I am sure that you've stumbled across the TOP command. You might have seen it by looking at the SQL statement when browsing tables in Enterprise Manager, or in the syntax for SELECT. However you found it, TOP is a very useful command in more ways than one.

Essentially, TOP lives up to its name. The most common syntax is:
SELECT TOP n * FROM table ORDER BY sales_count DESC

The query above will return the first n rows of the result set. This type of query can be useful in many situations - for example, when you are outputting the top ten selling products on your website. Notice that the ORDER BY clause is set. ORDER BY determines the order in which the rows are returned, and since we are keeping only the top n, it also controls which records get displayed. Without ORDER BY, TOP would simply return the first n records in whatever order the result set happens to come back.

TOP also has another mode - PERCENT. Instead of returning a fixed number of rows, PERCENT returns the top n percent of a result set. This can be good if you want to find, say, the top ten percent of your most frequently sold items:

SELECT TOP 10 PERCENT * FROM products ORDER BY sales_count DESC

The PERCENT mode will work with any percentage, up to and including 100 percent. Why would you put TOP 100 PERCENT in a query when leaving TOP out entirely gives the same result? Well, I can think of one main reason - creating a view with a default sort. In SQL Server, a view cannot include an ORDER BY clause unless the TOP keyword is also specified. But let's say you want the view's entire result set pre-sorted, without callers having to specify ORDER BY in their own queries. Since you can't put an ORDER BY clause into the view without specifying TOP, simply put the following SQL into your view:

CREATE VIEW myView AS
SELECT TOP 100 PERCENT * FROM table ORDER BY sales_count DESC

Now you have a view that is pre-sorted.
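Using the view (with the table and column names from the sketch above) is then as simple as:

SELECT * FROM myView

and the rows come back already ordered by sales_count, with no ORDER BY needed in the outer query.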

Yonah's SQL Tip of the Week - Introduction

So one of the new features I want for this blog is a weekly SQL tip. I have worked and chatted with hundreds of developers over the years, and while even the most junior of my colleagues have great experience with the basic SQL commands, very few have mastered even a handful of the less-commonly used functions and features.

I can't begin to tell you how many e-mails and message board posts I've seen where people have created intricate SQL statements where, had they known about some of these features, they could have used only one or two extra words in their query instead of lines.

That's why I am putting these solutions into a weekly tip.

Wednesday, March 23, 2005

Teaching a man to Fish

As someone who likes to contribute to user forums and mailing lists, when I provide a solution, I generally avoid including the code. I might include the pseudo-code, or suggest how one might code it, but I try to stop just short of the actual code. Why? Because I believe that if we just go around posting code snippets everywhere, we will encourage bad coding practices and enable people to import snippets into their programs without even thinking about the ramifications to the system.

By making Joe/Jane Forum Poster devise the code out of a concept, he or she is able to write code that best suits their needs, without importing someone's half-assed attempt at solving a problem that they have limited knowledge of.

I am also limiting my liability, of course, as most of the time the code I post - while it will work - isn't necessarily the cleanest, most efficient, or safest code; it's really just there as an example.

This, of course, is different from my articles, because there I need to write the code to illustrate the concept. The biggest difference, however, is that I choose the topics, and then Joe or Jane can apply them to their own code needs.

Out of nowhere...

Today I got an e-mail from a complete stranger with a technical question. The question was related to a Java project I had worked on in 2002 - almost 3 years ago. I believe that he got my information from the Java Forum as I was working on developing a framework for sending ringtones and icons via SMS.

The only reason that his e-mail reached me is that I updated my profile on that website. I guess you need to be careful what you write on the 'net, because at some point someone might ask you a question about it.

Monday, March 21, 2005

We're changing things around

So after debating and debating about what to do with this site, I have decided to devote it to the thing I have been doing on message boards and mailing lists for years - commenting on the technology that I use in my day to day life.

I hope that whoever out there in the blogosphere is reading this will find the information useful and informative, and please note that I will gladly answer questions about anything I post. Just send the questions to 'blog at bonerosity dot com'.


Monday, January 31, 2005

Wow, I've got to update more often...

Hey everyone, I know it's been a while, and I still need to put together my wishlists. I hope to be updating more in the future. I am also getting a new camera phone with a megapixel camera, so hopefully I will be able to review it, as well as use it for random pictures on the blog.

Tuesday, January 11, 2005

The Battle for the MP3 lowlands...

I have been unabashed about my desire for an MP3 player - both on my blog as well as with my wife. If last year's big story was about the iPod locking up the high-end digital audio player market, the story of '05 (as evidenced by CES and Macworld) is going to be about the low end. Make no mistake: Steve Jobs doesn't want to repeat the mistakes he made in the 1980s, so this time he's going after the low-end market too.

So Apple has introduced the minimalist-chic iPod Shuffle. While I think this will go over great with joggers and budget-minded, fashion-conscious individuals, I am not sure it will help him tie up the low-end MP3 player market. The 1GB iPod Shuffle is probably the most affordable 1GB MP3 player on the market at $149, but for $50 more (and probably less than that after some price dilution) I can get a 1GB Creative Zen Micro, or a 2.5GB Rio Carbon with 2.5 times the capacity. Oh, and both of those devices have real navigation and an LCD display.

The Zen Micro seemed to be just as hot as the iPods this holiday season, with most retailers running out of stock. Creative has now introduced three new capacities for the Micro - 1, 4, and 6GB join the existing 5GB model - and interestingly enough, there is only about a $20-30 price difference between them. (Which means that the least popular sizes will be on sale soon :) ). They also introduced the Zen Micro Photo in 5 and 6GB capacities, and at $300-350, they are a bargain compared to the iPod Photo at $499. Granted, the iPod Photo starts at 40GB, but my laptop hard drive isn't even that big.

Ah yes, it's going to get interesting.

Wednesday, January 05, 2005

Waiting for CES

I decided to hold off my '05 wishlists and predictions until after CES. This will give me ample time to decide what I want from the cream of the crop.

My wife and I were sitting on the train today next to someone with a 4G iPod, and I was staring at it. My wife just looked at me and sighed.

Maybe Apple will introduce a flash-based iPod that even she can't say no to?