Friday, June 13, 2008

Nations 2008


    We would like to invite you to join us for the most fun & exciting flair bartending event of the year, coming to Las Vegas this July 27th, 28th & 29th!!! 
The NATIONS International Flair Challenge will be held at the Green Valley Ranch Resort Spa & Casino.  This flair-only event will be set up for three days of fun for each and every competitor.  A Level 4 event on the 2008 FBA Pro Tour Americas as well as a major stop on the 2008 FBA Advanced Tour, NATIONS will be giving away almost $40,000 in cash & prizes in three divisions, Professional, Advanced & Amateur, with $15,000 cash going to the top Pro. 
NATIONS sponsors will once again be Skyy Vodka, Midori Melon Liqueur, X-Rated Fusion, Finest Call Brand Premium Mixes, DeKuyper Cordials, Barproducts.com & Flairco.com.
With relaxed rules, NATIONS is set up for fun.  There will be a pool party for all competitors and competitor specials throughout the event, plus great rates on rooms and plenty of fun for all involved.  This is going to be the best event of the year.
RULES & GUIDELINES
All NATIONS rules, drink information and formats are now posted on the official NATIONS website, www.nationsflairchallenge.com.
REGISTRATION IS NOW OPEN!
Registration for NATIONS 2008 is NOW OPEN on the FBA website. You can register and pay your entry fee at: http://nations.barflair.org. Do not delay, as competitors only have until June 23rd to take advantage of the Early Bird entry fee.  Sign up before June 23rd and entry fees are only $250, $150 & $75 for Pro, Advanced & Amateur, respectively.
COMPLETE ENTRY FEE SCHEDULE IS AS FOLLOWS:
____________________________________________________________________
PRO (Limited to 30 spots)
•    $250 - UNTIL June 23rd
•    $350 - AFTER June 23rd, UNTIL July 21st - REGISTRATION CLOSES
•    $450 - AT EVENT
____________________________________________________________________
ADVANCED (Limited to 30 spots)
•    $150 - UNTIL June 23rd
•    $250 - AFTER June 23rd, UNTIL July 21st - REGISTRATION CLOSES
•    $325 - AT EVENT
____________________________________________________________________
AMATEUR (Limited to 30 spots)
•    $75 - UNTIL June 23rd
•    $150 - AFTER June 23rd, UNTIL July 21st - REGISTRATION CLOSES
•    $200 - AT EVENT
____________________________________________________________________
____________________________________________________________________
PRIZE BREAKDOWN
PRO*
*Top 9 Professionals Compete at Finals
Champion $15,000.00
2nd $5000.00
3rd $2500.00
4th $2000.00
5th $2000.00
6th $2000.00
7th $1500.00
8th $1500.00
9th $1500.00
Pro Total  $33,000.00
____________________________________________________________________
ADVANCED**
**Top 5 Advanced Compete at Finals
Champion $1000.00
2nd $500.00
3rd $250.00
4th $125.00
5th $125.00
ADVANCED Total $2000.00
____________________________________________________________________
AMATEUR***
***Top Amateur Performs at Finals
Champion Trophy
2nd Trophy
3rd Trophy
____________________________________________________________________
OTHER
1st Place Finest Call Stall $500.00
2nd Place Finest Call Stall Trophy
Feel free to shoot us an email if you have any questions or concerns.  We are looking forward to the biggest and best event of the FBA Pro Tour year!!!
See you in Vegas!!






Euro 2008

What is Euro 2008 and why does it exist?
It's a simple application for managing the UEFA Euro 2008 Austria-Switzerland championship. With this program you can view the match schedule, save final scores, display the group and second-stage tables, build and export simple statistics and, optionally, update the results online.

Software license
Euro 2008 is donationware: freeware with an optional contribution.
What does this mean?
Euro 2008 is freeware: you can use it for free. Unlike shareware applications, which are limited in trial period and functionality, with Euro 2008 you get a fully functional and free application (online update service included).
OK, but what about the optional contribution?
Euro 2008 exists in four versions (Mac OS 8/9, Mac OS X, Windows, Linux), each available in two languages (IT, EN), with icons and “Read Me” files customized for each platform. This means that developing the software required a lot of time and resources.
So, if you like this software, feel free to send a small contribution, a comment and the version you’re using to the postal address shown at the end of this document, or use the "Donate..." button inside the application. I will really appreciate it. Thank you very much.

Main features
- Complete match calendar and graphical visualization of the groups and the second stage
- Online update of match results
- Conversion of match dates and times to your local time zone and system format. You can enter the time difference between your zone and Austria-Switzerland directly, or compute it automatically via the Internet simply by selecting your geographical area (see the sketch after this list).
- “Copy” function in every section of the application (you can “Paste” data directly into a spreadsheet or word processor).
- Statistics can be exported in both text and HTML formats.
- Proxy support (including the basic authentication method)
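
The time-zone feature is easy to picture in code. Below is a minimal sketch in Python (purely illustrative, not the application's actual code) that converts a kickoff time from the hosts' zone to the viewer's local zone; the date and time are just examples:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Kickoff expressed in the hosts' time zone (Vienna).
    kickoff = datetime(2008, 6, 29, 20, 45, tzinfo=ZoneInfo("Europe/Vienna"))
    local = kickoff.astimezone()  # convert to the system's local time zone
    print(local.strftime("%Y-%m-%d %H:%M %Z"))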

Euro 2008 FAQ
Q: This software doesn't automatically calculate the team rankings in each group. Why?
A: Automatic computation, besides being complex to handle because of the tiebreaking rules applied in case of complete parity, would limit the "modify the results and see what happens in the second-stage table" way of using the software.

System requirements
Macintosh:
Any Macintosh with Mac OS 9.0 or later, including Mac OS X
Windows:
Windows 98 or later
Gnu/Linux:
i386 platform (requires libstdc++.so.6, tested on Fedora Core 8 and 9)

Versions History
Version 0.9a1 - 20/5/2008
- First internal version, based on Germany 2006
Version 1.0 - 28/5/2008
- First public version

Acknowledgments
Many thanks to all the users who contributed to past versions of this software

Disclaimer
This software may be freely distributed, but always in its unmodified form and together with this document. It may not be sold or resold, or bundled with any other commercial product. You may, however, include the software on a CD-ROM or floppy collection including the original package in its entirety.
You expressly acknowledge and agree that use of the software is at your sole risk. The software and the related documentation are provided “as is” and without warranty of any kind, express or implied.






Thursday, June 12, 2008

Carpool

For those who do not already know, carpooling is when you team up with other people to make a trip by car. What does this imply? You pay less, and you help the environment by riding in the same direction with people you do not know. At travelsmart I got a better idea of what it is all about. Basically it is a database where you sign up and state where you are going, at what time, how many seats are available in the car, and a contact number. The trip costs are divided equally among all participants. If I want to go on holiday and carpool, I access the site and see what options I have.
Denis Barrow, a good friend of mine and the one who created the site, wants to give the code away to everybody for free to make carpooling better known. Do you think this is a good idea? And how can it be promoted?
Waiting for your answers.






Wednesday, May 7, 2008

The robots.txt file

Although the robots.txt file is very important if you want a good ranking on search engines, many web sites don't offer one.

If your web site doesn't have a robots.txt file yet, read on to learn how to create one. If you already have a robots.txt file, read our tips to make sure that it doesn't contain errors.

What is robots.txt?

When a search engine crawler comes to your site, it will look for a special file on your site. That file is called robots.txt, and it tells the search engine spider which web pages of your site should be indexed and which web pages should be ignored.

The robots.txt file is a simple text file (no HTML) that must be placed in your root directory, for example:

    http://www.example.com/robots.txt
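
A quick way to check that the file is really served from your root directory is to fetch it. A minimal sketch in Python, with the domain as a placeholder:

    from urllib import request

    # Prints the robots.txt your server returns; raises an HTTPError
    # (e.g. 404) if no file is present at the root.
    print(request.urlopen("http://www.example.com/robots.txt").read().decode())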

How do I create a robots.txt file?

As mentioned above, the robots.txt file is a simple text file. Open a simple text editor to create it. The content of a robots.txt file consists of so-called "records".

A record contains the information for a specific search engine. Each record consists of two fields: the User-agent line and one or more Disallow lines. Here's an example:

User-agent: googlebot
Disallow: /cgi-bin/

This robots.txt file would allow "googlebot", which is the search engine spider of Google, to retrieve every page from your site except for files from the "cgi-bin" directory. All files in the "cgi-bin" directory will be ignored by googlebot.

The Disallow command matches paths by prefix, like a wildcard anchored at the start. If you enter

User-agent: googlebot
Disallow: /support

both "/support-desk/index.html" and "/support/index.html", as well as every other path that starts with "/support", would not be indexed by search engines.
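
You can verify this prefix behavior with Python's standard urllib.robotparser module. A minimal sketch, with a placeholder domain:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Feed the record directly instead of fetching it from a server.
rp.parse([
    "User-agent: googlebot",
    "Disallow: /support",
])

for path in ("/support/index.html", "/support-desk/index.html", "/about.html"):
    allowed = rp.can_fetch("googlebot", "http://www.example.com" + path)
    print(path, "->", "allowed" if allowed else "blocked")

Both "/support..." paths come out blocked, while "/about.html" stays allowed.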

If you leave the Disallow line blank, you're telling the search engine that all files may be indexed. In any case, you must enter a Disallow line for every User-agent record.

If you want to give all search engine spiders the same rights, use the following robots.txt content:

User-agent: *
Disallow: /cgi-bin/

    Where can I find user agent names?

    You can find user agent names in your log files by checking for requests to robots.txt. Most often, all search engine spiders should be given the same rights. In that case, use "User-agent: *" as mentioned above.
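
    A short script can also pull those names out of the logs for you. Here is a minimal sketch that assumes an Apache-style combined log at the hypothetical path "access.log" and prints each distinct user agent that requested /robots.txt:

      import re

      # Request line, then status/size and referer, then the quoted user agent.
      PATTERN = re.compile(r'"GET /robots\.txt[^"]*"[^"]*"[^"]*" "([^"]*)"')

      seen = set()
      with open("access.log") as log:
          for line in log:
              match = PATTERN.search(line)
              if match and match.group(1) not in seen:
                  seen.add(match.group(1))
                  print(match.group(1))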

    Things you should avoid

    If you don't format your robots.txt file properly, some or all files of your web site might not get indexed by search engines. To avoid this, do the following:

    1. Don't use comments in the robots.txt file
      Although comments are allowed in a robots.txt file, they might confuse some search engine spiders.
      "Disallow: /support # Don't index the support directory" might be misinterpreted as "Disallow: /support#Don't index the support directory".
    2. Don't use white space at the beginning of a line. For example, don't write

         User-agent: *
         Disallow: /support

      but

      User-agent: *
      Disallow: /support


    3. Don't change the order of the commands. If you want your robots.txt file to work, keep the commands in the right order. Don't write

      Disallow: /support
      User-agent: *

      but

      User-agent: *
      Disallow: /support

    4. Don't use more than one directory in a Disallow line. Do not use the following

      User-agent: *
      Disallow: /support /cgi-bin/ /images/

      Search engine spiders cannot understand that format. The correct syntax for this is

      User-agent: *
      Disallow: /support
      Disallow: /cgi-bin/
      Disallow: /images/


    5. Be sure to use the right case. The file names on your server are case sensitive. If the name of your directory is "Support", don't write "support" in the robots.txt file.
    6. Don't list all files. If you want a search engine spider to ignore all files in a particular directory, you don't have to list every single file. For example:

      User-agent: *
      Disallow: /support/orders.html
      Disallow: /support/technical.html
      Disallow: /support/helpdesk.html
      Disallow: /support/index.html

      You can replace this with

      User-agent: *
      Disallow: /support

    7. There is no "Allow" command.
      Don't use an "Allow" command in your robots.txt file. Only mention files and directories that you don't want to be indexed. All other files will be indexed automatically if they are linked on your site.

    Tips and tricks:

    1. How to allow all search engine spiders to index all files

      Use the following content for your robots.txt file if you want to allow all search engine spiders to index all files of your web site:

      User-agent: *
      Disallow:

    2. How to disallow all spiders to index any file

      If you don't want search engines to index any file of your web site, use the following:

      User-agent: *
      Disallow: /

    3. Where to find more complex examples.

      If you want to see more complex examples of robots.txt files, view the robots.txt files of big web sites.

    Your web site should have a proper robots.txt file if you want good rankings on search engines. Only if search engines know what to do with your pages can they give you a good ranking.

    Monday, April 14, 2008

    The right search engine optimization (SEO) strategy

    The right strategy is crucial to the success of your search engine optimization activities.

    Search engine users are some of the most qualified and motivated visitors to your web site you will ever have. After all, they have taken the initiative to hunt for online resources on a certain topic. And then they clicked your link to learn more.

    However, getting listed in a search engine doesn't do you much good if you're number 415 of 1,259,000 search results. Surprisingly, it doesn't even help you much if you're result number 11. Most search engines display 10 results on the first page, and relatively few searchers click through to look at the second page.

    Make sure your web site will get the best possible search engine rankings and you'll get as many visitors as possible. High rankings contribute greatly to the success of your Internet business.

    The key to high search engine rankings is to do the right things in the right order:

    1. Find the right keywords for your web site.
    2. Optimize your web pages for these keywords so that they can get high search engine rankings.
    3. Submit your web pages to all important search engines and directories so that web surfers can find you.
    4. Get links from other web sites and make sure that these links contain your keywords.
    5. Track the results.

    Thursday, April 10, 2008

    The importance of valid HTML code

    Many webmasters overlook a very important aspect of web site promotion: the validity of the HTML code.

    What is valid HTML code?

      Most web pages are written in HTML. As with every language, HTML has its own grammar, vocabulary and syntax, and every document written in HTML is supposed to follow these rules.

      Like any language, HTML is constantly changing. As HTML has become a relatively complex language, it's very easy to make mistakes. HTML code that does not follow the official rules is called invalid HTML code.

    Why is valid HTML code important?

      Search engines have to parse the HTML code of your web site to find the relevant content. If your HTML code contains errors, search engines might not be able to find everything on the page.

      Search engine crawler programs obey the HTML standard. They can only index your web site if it is compliant with the HTML standard. If there's a mistake in your web page code, they might stop crawling your web site and might lose what they've collected so far because of the error.

      Although most major search engines can deal with minor errors in HTML code, a single missing bracket in your HTML code can be the reason your web page cannot be found in search engines.

      If you don't close some tags properly, or if some important tags are missing, search engines might ignore the complete content of that page.

    How can you check the validity of your HTML code?

      Connect to an official HTML validator that will check the code of your web pages.
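
      As an illustration, here is a minimal sketch that submits a page's HTML to the W3C "Nu" HTML checker and prints the errors it reports. It assumes the checker's JSON interface at validator.w3.org/nu/ (interfaces change, so check the service's documentation):

      import json
      import urllib.request

      def validate_html(html):
          req = urllib.request.Request(
              "https://validator.w3.org/nu/?out=json",
              data=html.encode("utf-8"),
              headers={"Content-Type": "text/html; charset=utf-8",
                       "User-Agent": "html-check-sketch"},
          )
          with urllib.request.urlopen(req) as resp:
              report = json.load(resp)
          # Each message carries a "type" ("error", "info") and readable text.
          return [m["message"] for m in report.get("messages", [])
                  if m.get("type") == "error"]

      for error in validate_html("<html><body><p>Unclosed</body></html>"):
          print(error)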

    Although not all HTML errors will cause problems for your search engine rankings, some of them can keep web spiders from indexing your web pages.

    Valid HTML code makes it easier for search engine spiders to index your site, so you should make sure that at least the biggest mistakes in your HTML code are corrected.


    Wednesday, April 9, 2008

    Search engine spiders and your web site

    Are you sure that search engines understand your web site? Search engines see your web pages with different eyes than web surfers.

    A web page that looks great to the human eye can be totally meaningless to search engines. For example, search engines cannot read the text on the images of your web site, and many don't understand web languages such as JavaScript or CSS.

    If you have a great looking web site that is meaningless to search engines, you won't be able to achieve high search engine rankings with that web site - no matter how good and interesting your web site content is.

    In general, search engines cannot see content that is presented in the following file formats:

    • Images (GIF, JPEG, PNG, etc.)
    • Flash movies, Flash banners, etc.
    • JavaScript and other script languages
    • Other multimedia file formats

    Some search engines can index some of these file formats, but in general it's very difficult to obtain high search engine rankings if your main web site content is presented only in these formats.

    Search engines need text to index your web site. They cannot know what's written on your GIF or JPEG images or in your Flash movies. If you use a lot of images on your web site, you should also create some web pages that contain a lot of text.

    If you want to find out how search engines see your web site, use a search engine spider simulator tool. Such a tool emulates the software programs search engines use to index your web site and shows you which elements of your web site are visible to search engines.

    That allows you to quickly find out whether your web site lacks information that search engines need to properly index your web site.
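
    As a rough illustration of what such a simulator does, here is a minimal sketch that fetches a page and prints only the text a crawler could actually read, skipping <script> and <style> contents entirely; the URL is a placeholder:

      from html.parser import HTMLParser
      import urllib.request

      class TextOnly(HTMLParser):
          def __init__(self):
              super().__init__()
              self.skip = 0      # nesting depth inside <script>/<style>
              self.chunks = []   # visible text pieces

          def handle_starttag(self, tag, attrs):
              if tag in ("script", "style"):
                  self.skip += 1

          def handle_endtag(self, tag):
              if tag in ("script", "style") and self.skip:
                  self.skip -= 1

          def handle_data(self, data):
              if not self.skip and data.strip():
                  self.chunks.append(data.strip())

      with urllib.request.urlopen("http://www.example.com/") as resp:
          parser = TextOnly()
          parser.feed(resp.read().decode("utf-8", errors="replace"))
      print("\n".join(parser.chunks))

    Everything on images, in Flash, or generated by JavaScript is simply absent from this output, which is exactly the point made above.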