Monday, November 26, 2007

Phishing is not an "externality"

I'm no security expert, not even close (I just read about it), while Bruce Schneier is a truly world-renowned security expert. I'm an avid reader of his monthly newsletter and, far more importantly, Neal Stephenson thanked him in Cryptonomicon, which is ummmm... words fail me, but let's say awesome. However, there is one particular hypothesis of Bruce Schneier's that I never bought into, not even a little bit: the "our customers are victims of phishing but it isn't affecting us" hypothesis of phishing as an externality. In this article (and several other places) he claimed that "Financial institutions have no incentive to reduce those costs of identity theft because they don't bear them." Again, I'm no security expert, but I never agreed with that sentiment; it seems obvious to me that customers leaving financial institutions over phishing problems is a direct cost, even if financial institutions are unaware of it or are ignoring it (it's an entirely different problem if that's the case.)

This new study indicates that financial institutions do indeed bear costs of phishing and, what's more, phishing seems to affect them at their core: by jeopardizing the trust people have in their brands. I don't know how many times I have bought an item from Amazon.com even when it was more expensive, just to avoid giving my data to an unknown merchant. That's the power of brand. If the study is correct (and it does need to be confirmed by further studies) then I think the "phishing is an externality" hypothesis can be safely rejected (most importantly by companies that adhere to it through ignorance or bad management.)

Tuesday, November 06, 2007

ApexSQL Log 2005.10 released + API

The big news this week is that we have released ApexSQL Log 2005.10 together with ApexSQL Log API 2005.10. Yup, the API is out there for all of you who have expressed interest in a programmable transaction log reading API over the past couple of years. But let's start with ApexSQL Log.

There are three major enhancements in this release of ApexSQL Log:
1. Support for ApexSQL Log API. The two applications share the same server-side components right from the start, so you can run them in parallel on the same server by design.
2. Improvements to the UPDATE reconstruction process. Due to the way SQL Server logs UPDATE operations, auditing them is the Achilles' heel of transaction log auditing. However, in this new version we have once again improved the process, managing to extract more data than ever. It's still not infallible (and it never will be unless SQL Server's way of logging UPDATE operations changes) but it's *very* good indeed.
3. Support for online transaction log reading on Vista x64 and, much more importantly, on the upcoming Windows Server 2008 (x64 and IA64, but more on that below)

Here are two enhancements that we didn't deem as major since they are experimental:
1. Experimental support for Itanium (IA64) platforms for SQL Server 2005 IA64 and SQL Server 2000 64-bit.
2. Experimental support for SQL Server 2008 on all platforms (x86, x64 and IA64.) This includes support for new data types (DATE, DATETIME2, DATETIMEOFFSET and TIME)

Yes, as you can see, we can actually add support for Itanium and SQL Server 2008 and not call it a major feature, simply because they are experimental. For comparison, try finding another transaction log reading application that supports even SQL Server 2005 on x64.

What does "experimental support" mean? It means that it works (and it all really does work) but that we don't support it officially. Which in turn means you get support *anyway*, and as always we try to fix problems ASAP *anyway*, but you understand that this support hasn't been tested as thoroughly as on our other platforms.

Now let's move on to ApexSQL Log API. The API exposes the DML auditing features of ApexSQL Log. Everything ApexSQL Log has in this regard (reading of online/detached/backup transaction logs, filtering, old/new table ID mapping, etc.) is exposed in the API and works just like it does in ApexSQL Log. So what's missing?
1. Recovery Wizard: if you need to recover from a data loss (deleted data without transaction log, truncated and dropped tables, corrupted MDF files) you will need to grab ApexSQL Log.
2. DDL auditing. In this initial version at least we are exposing only DML auditing.
3. Out-of-the-box exports to XML, CSV and so on. All of these can be built using the API, so we didn't include them. We are evaluating publishing export classes built on the API just to demo the technology.
4. Command Line Interface and GUI. You would need to build those yourself, but it can be done with the API.

I'll post more soon on the way the API is used. Regarding licensing and related matters (like distribution) I would recommend that you consult here.

From now on I'll hopefully be writing a bit more (would I bet on it, you say?! well... what odds are you giving me? ;) There are several parallel projects that I'm involved with but can't discuss right now. Suffice it to say that ApexSQL Log (and the API) will be getting some pretty cool stuff in the ApexSQL Log 2008 release, and the same goes for some other products of ours (and one completely new one...)

Monday, July 30, 2007

Fast forward 2 months

I see that it's been more than 2 months since I last posted. But things have been moving in the fast lane: just this last Friday we released ApexSQL Log API to QA - I hope you'll see it in our offer soon. The release of ApexSQL Log API will be accompanied by the release of ApexSQL Log 2005.10, which has also been released to QA. Both share the same set of server-side components, but more on that in a dedicated post.

What have I missed blogging about? Well, apart from SQL Server/development stuff, there is the obvious 38th anniversary of the first moon landing (and the 8th anniversary of my arrival in Chile - they fall on the same day! how geeky is that??) Then the not so obvious - 100 years since the birth of Robert Anson Heinlein.

Oh, and I'm blogging this at Miami airport waiting for my connecting flight to Raleigh - and from there to our HQ in Chapel Hill. Among other things I have an unsettled debt there... No, I haven't prepared for it. I haven't played basketball since that fateful day last year... But I'm counting on inspiration... or something... anything!

More on Log stuff and development soon, I (kinda) promise.

Sunday, May 27, 2007

reCAPTCHA: fight spam and help digitize books

I just read this article on Ars Technica. I won't go into details here, but it's very cool, so check it out. And if you have time, take a look at this video - it's a lecture by Luis von Ahn (he's mentioned in the article) on harnessing the power of human computation (sounds scary but it's not :)

Wednesday, May 23, 2007

ZX Spectrum - 25th anniversary

This is what I wanted to blog about today but I'll leave it for some other time now.

Rah

My dearest of all cats, Rah, has died today after a long fight with illness (induced by several poisonings he had over the past year.) Here's a photo from last year taken on the day I let all my cats roam free for the first time:



He was my friend, my companion, my pride and joy. I loved him dearly.

Thursday, May 17, 2007

Building Boost 1.34 for x86, x64 and IA64

The new version of the Boost library was released recently. Since we released ApexSQL Log 2005.04 just the other day, we can now migrate our source base to Boost 1.34. This post deals with building Boost static libraries for side-by-side compilation and linking of x86, x64 and IA64 binaries. This is now much easier than it was in Boost 1.33, but there is still some work to be done.

1. Download Boost and Boost.Jam from here.
2. Put Boost.Jam binary into a directory on your PATH.
3. Uncompress Boost library into a directory of your choice.
4. There is an error in one of Boost's build files that needs to be fixed before compiling for the IA64 architecture:
  1. Go to the boost_1_34_0\tools\build\v2\tools folder.
  2. Open msvc.jam in your favorite editor (mine is by *far* Visual Studio itself)
  3. Replace all instances of "x86_IPF" with "x86_ia64" (there should be two)
There is also an omission in the same file: the native x64 compiler will be used only on boxes where %PROCESSOR_IDENTIFIER% matches AMD64. So if you have an Intel-based x64 CPU, the build will be done with the x86-to-x64 cross-compiler. It's just slower; the results are the same. If it bothers you, you should be able to fix it easily.
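If you'd rather script the fix than edit msvc.jam by hand, here is a small sketch of the replacement from step 3 (the helper name and the two-occurrence sanity check are mine, not part of Boost):

```python
def patch_msvc_jam(text):
    """Replace the incorrect IA64 setup name "x86_IPF" with "x86_ia64".
    Refuses to run if the file doesn't look as expected."""
    occurrences = text.count("x86_IPF")
    if occurrences != 2:
        raise ValueError("expected 2 occurrences of x86_IPF, found %d" % occurrences)
    return text.replace("x86_IPF", "x86_ia64")

# Hypothetical usage against the file from the steps above:
# path = r"boost_1_34_0\tools\build\v2\tools\msvc.jam"
# patched = patch_msvc_jam(open(path).read())
# open(path, "w").write(patched)
```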

5. Go to the command prompt, change directory to boost_1_34_0 and build the Boost library for all three architectures.
  • For x86:
bjam msvc architecture=x86 stage
  • For x64:
bjam msvc architecture=x86 address-model=64 stage
  • For IA64:
bjam msvc architecture=ia64 stage

6. Install x86 libraries:

bjam msvc architecture=x86 install

By default Boost will install its lib and include files into C:\Boost\lib and C:\Boost\include\boost-1_34. The problem with this is that if you then install the x64 or IA64 static library files, they will overwrite the x86 static library files already installed. To avoid this, we need to move all x86 static libraries to C:\Boost\lib\x86 (this is my solution - obviously other solutions are possible) so that they aren't overwritten by subsequent installations.

7. Install x64 libraries:

bjam msvc architecture=x86 address-model=64 install

Again, move these libraries from C:\Boost\lib to C:\Boost\lib\x64.

8. Install IA64 libraries:

bjam msvc architecture=ia64 install

Now move these libraries from C:\Boost\lib to C:\Boost\lib\ia64.
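The moving in steps 6 through 8 can be scripted too; here is a minimal sketch (the function and folder layout are my own, the C:\Boost paths come from the defaults mentioned above):

```python
import shutil
from pathlib import Path

def move_boost_libs(lib_dir, arch):
    """Move the freshly installed static libraries (*.lib) into a
    per-architecture subfolder so the next 'bjam ... install' run
    cannot overwrite them."""
    lib_dir = Path(lib_dir)
    target = lib_dir / arch
    target.mkdir(exist_ok=True)
    moved = []
    for lib in sorted(lib_dir.glob("*.lib")):
        shutil.move(str(lib), str(target / lib.name))
        moved.append(lib.name)
    return moved

# Hypothetical usage after each install step:
# move_boost_libs(r"C:\Boost\lib", "x86")
# move_boost_libs(r"C:\Boost\lib", "x64")
# move_boost_libs(r"C:\Boost\lib", "ia64")
```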

9. Adapt your Visual C++ projects so that:
  • x86 platform links libraries from C:\Boost\lib\x86
  • x64 platform links libraries from C:\Boost\lib\x64
  • ia64 platform links libraries from C:\Boost\lib\ia64
That's it! You should now be ready to use Boost libraries for binaries on all three Windows platforms.

Update: Fixed a typo in "For IA64" build statement.

Wednesday, May 16, 2007

ApexSQL Log 2005.04 released

Yesterday we finally released version 2005.04 of ApexSQL Log. You can download it here.

Here's the full list of enhancements, changes and fixes:

----------------------------------------------------

RELEASE 2005.04.0453
DATE: 15 May 2007
DESCRIPTION: Medium Enhancement/Fix release
----------------------------------------------------

Enhancements:

- Greatly improved reconstruction of UPDATE operations (MAJOR ENHANCEMENT)
- Greatly improved memory footprint and performance scaling (MAJOR ENHANCEMENT)
- Added support for recovery of VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) data types from
database files (MAJOR ENHANCEMENT)
- Improved support for transaction log backups converted from 3rd party backups (MAJOR ENHANCEMENT)
- Added partial reconstruction of changes made to fixed-length fields in UPDATE operations.
- Added partial support for recovery of XML data type from database files.
- Added support for reading SQL Server 7/2000 transaction logs under SQL Server 2005.
- Added support for reading SQL Server 2005 transaction logs under SQL Server 7/2000.
- Added more recovery reports details to recovery scripts.
- Added dummy data to BLOBs that are partially recovered due to lack of data.
- Added "row partially reconstructed" column to SQL, BULK, CSV and XML exports.
- Added "/run_small" switch to the command line interface, which forces the application not to save
most of the intermediate files to the hard drive. If there is less than 5% free space on the drive
the application will automatically switch to "run small" mode.
- Added recovery script path selection to Recovery Wizard.
- Added table mapping to MDF data recovery.
- Improved generated recovery scripts for less-than-perfect data recovery scenarios.
- Improved progress bar during transaction log reading.
- Improved reliability of BLOB recovery algorithm.
- Improved auditing performance in general.
- Improved memory management when under memory pressure.
- Improved formatting for REAL and DOUBLE SQL types.
- Improved diagnostic logging.
- Integrated client-side and server-side setups into one setup.

Changes:

- Increased drive usage for intermediate and temporary files during auditing/recovery, to approximately
10-20% of transaction log file sizes. If there is less than 5% free space on the drive the application
will switch automatically to "run small" mode thereby saving on drive space.
- Limited export options to exporting every 10th REDO/UNDO script during the evaluation period.
- Limited copy to clipboard for all REDO/UNDO scripts during the evaluation period.

Fixes:

- A set of problems with server-side components running on SQL Server run by non-administrator account (MAJOR FIX)
- A problem with relying on @@SERVERNAME for server name on repeated connections which
blocked re-connections for servers not accessible through their @@SERVERNAME (MAJOR FIX)
- A problem with UPDATETEXT and line breaks in BLOB recovery (MAJOR FIX)
- A problem with double reading of log files during open (MAJOR FIX)
- A problem with recovery of some system table structures (MAJOR FIX)
- A problem with primary key values in SQL export (MAJOR FIX)
- A problem with XML data type recovery (MAJOR FIX)
- A problem with some recovery options not working correctly on remote servers (MAJOR FIX)
- A problem with putting server logs to system32 (or SysWOW64) directories. Now server logs are all in LOG
directory of SQL Server instance.
- A problem with recovery of dropped tables in SQL Server 2000.
- A problem with bad IDENTITY status for some reconstructed tables on SQL 2005.
- A problem with grid refresh if the last transaction was rolled back.
- A problem with clustered index keys not shown for un-reconstructed UPDATE operations.
- A problem with in-row values in NTEXT fields not correctly shown.
- A problem with recovery of BLOB data of NTEXT columns.
- A problem with NVARCHAR(MAX) columns in recovered table structures.
- A problem with some field values not being exported on UPDATE operations.
- A problem with application crashing when trying to write to a read-only file.
- A problem with "File not found" exception when doing recovery from online database files.
- A problem with extended procedure connecting back to SQL Server and being rejected due to lack of permissions.
- A problem with SQL authentication not working correctly when accessing logs from server tree.
- A problem with SQL authentication if invalid user/password entered.
- A problem with a label on SQL authentication user/password window.
- A problem with relying on @@SERVERNAME to start Connection Monitor.
- A problem with temporary UNDO/REDO scripts not having SQL extension.
- A problem with table structure lines not ending with consistent UNICODE line endings.
- A problem with reaching EOF of online database files.
- A problem with declaration of FLOAT data type in generated scripts.
- A problem with duplicate column names in some recovered table structures.
- A problem with duplicate recovered tables.
- A problem with lingering IDENTITY_INSERT state after a generated INSERT with identity fails.
- A problem with inserting NULL field values into NON-NULL columns in some recovery scenarios.
- A problem with application always demanding valid file paths in sysfiles.
- A problem with recovery of tables with timestamp columns in some recovery scenarios.
- A problem with duplicate servers in "Server Activation Center" dialog.
- A rare problem with BLOB data updates being out of range for the current BLOB values.
- A rare problem with not loading all operations in some circumstances.
- A rare problem with command line interface not creating log files correctly.
- A rare connectivity problem on SQL Server 2005.
- A rare problem with activation not working correctly.
- A rare problem with activations on multiple instances of SQL Server 2005.
- A rare problem with NULL values crashing the application.
- A very rare problem with calls to some empty extended stored procedures hanging on SQL 2005.
- A very rare problem with Connection Monitor not resetting on consecutive errors.
- A very rare problem with ApexSQL Server Helper driver sometimes crashing on SMP machines.

As you can see this is a huge release. I have blogged about some aspects of it before here and here.

Monday, April 30, 2007

ApexSQL Log 2005.04, part II: Smoother user experience

This is the 2nd part of my "ApexSQL Log 2005.04" series, which I started here. In the first part I blogged about some of the most important new features and fixes; this time I'm going to cover the most important things we did to improve the user experience (especially for new users).

Integrated client-side and server-side setups into one setup

Starting with version 2005.03 we have provided a standalone server-side setup for our customers. However, that wasn't convenient enough, so we went one step further: we now provide one unified setup for client- and server-side components.

There are now three setup options:
1. Client application (GUI and CLI) and server-side components on a local server
2. Client application (GUI and CLI)
3. Server-side components on a local server

The only thing worth noting here is that the setup can install server-side components only on a local server (this includes virtual servers in a failover cluster) and not on a remote server.


Problems with SQL Server run by non-administrator account

In previous versions of the software our server-side components needed a high level of privileges, which of course led to problems in environments where the privileges of the account running the SQL Server service are restricted. In version 2005.04 we took this problem head-on and significantly lowered the level of privileges needed by the server-side components. In the process we also solved another problem, springing from the inability of the account running SQL Server to connect back to the server itself. There is one known issue left here: Connection Monitor still needs login privileges for the account. In future versions we will allow manual configuration of Connection Monitor's connection parameters.

We also had problems with logging in the server-side components when privileges were lacking. In previous versions server-side logs were stored in the System32 (or SysWOW64) folder, but in some configurations the account running the SQL Server service lacks privileges for writing into system folders. Version 2005.04 stores all server-side logs in the "LOG" subdirectory of the SQL Server instance (which, in retrospect, is the ideal place for log files!)

For the record, I think that restricting the privileges of the SQL Server service (and other services) to the bare minimum is a great security practice, and one that we certainly try to encourage.

Problems with bad @@SERVERNAME

ApexSQL Log uses @@SERVERNAME to identify the server's real name in some situations. But this leads to problems if you change the server's network name after SQL Server has been installed. When that happens, @@SERVERNAME continues to return the old server name even after a service restart, so ApexSQL Log doesn't have access to the real machine name, which in turn leads to all sorts of problems. This is a surprisingly (to me) common situation, and we often had to help users fix their @@SERVERNAME values. Starting with version 2005.04 this has been fixed. We still use @@SERVERNAME for some internal stuff, but the connection is now always made through the server name as entered by the user. This also solved the problem of accessing servers available only through an IP address or listening on a port other than 1433.

We also solved the same problem on the server side, in Connection Monitor. Connection Monitor has to connect back to SQL Server and used to obtain the server name from @@SERVERNAME. It doesn't any more: it now always connects to 127.0.0.1, with the instance name (if available) retrieved directly from the command line that started SQL Server.

In case you are wondering, to fix @@SERVERNAME so that it returns the correct machine name you can do the following:
1. Execute the following script

sp_dropserver ''
go
sp_addserver '', local
go

2. Restart SQL Server service
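As a toy illustration of what goes wrong, here is a sketch of the kind of check involved (the function, names and comparison rules are my assumptions, not the product's actual code): the host part of @@SERVERNAME simply stops matching the machine's real network name after a rename.

```python
def servername_is_stale(at_at_servername, machine_name):
    """Return True when the host part of @@SERVERNAME (the part before
    any '\\instance' suffix) no longer matches the machine's real
    network name, compared case-insensitively."""
    host = at_at_servername.split("\\")[0]
    return host.upper() != machine_name.upper()
```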

Saturday, April 28, 2007

ApexSQL Log 2005.04, part I

As I announced back in March, ApexSQL Log 2005.04 is now in QA. It took more time than I thought back then, but for a good reason: we went back to our keyboards and did more damage on new issues reported by some customers - especially scaling for very large transaction log files (50 GB and greater). In any case, version 2005.04 is in QA right now, it's looking great and I hope it will be out soon. In the meantime I'm going to do a small series of posts on the improvements that 2005.04 brings, starting right now.

Reconstruction of UPDATE operations

Here's the problem of UPDATE reconstruction in a nutshell:
1. In the general case, when logging an UPDATE statement, SQL Server just logs what was changed and into what.
2. This before/after state can correspond to anything from part of a single field to parts spanning several fields of a row.

So from 1 and 2 follows this definition of the problem:

To fully reconstruct what happened at the field level in an UPDATE statement, one needs to know the state of the row on which the UPDATE statement operated.

The difficult part of UPDATE reconstruction is finding that original state of the row. We have greatly improved this in version 2005.04, and I will blog more on it later, together with 2005.03/2005.04 examples.
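A toy sketch of why the before-image is indispensable (the byte-range log format here is purely illustrative, not SQL Server's actual record layout): the log effectively says "at this offset, these bytes appeared", which yields full old and new field values only when applied to a known row state.

```python
def apply_logged_update(row_before: bytes, offset: int, new_bytes: bytes) -> bytes:
    """Apply a logged partial change to a known before-image of a row.
    The log record alone identifies only the changed byte range; the
    surrounding field values must come from row_before."""
    return row_before[:offset] + new_bytes + row_before[offset + len(new_bytes):]
```

With `row_before` in hand we can report both the old and the new value of every affected field; without it, all we know is that some bytes in the middle of the row changed.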

Memory footprint and performance scaling

Storage is getting cheaper, processing power abounds, bandwidths are improving, and all this leads to larger databases and higher transaction counts. Both of these increases - larger databases mean larger MDF files, which we use in our recovery process, and more transactions mean larger transaction logs - are beginning to weigh on our "last year's" technology. Hence we felt it necessary to redesign the way memory is used by the application in order to allow greater scaling. We also wanted to improve the user experience by having the application play nicely with system resources (well, with memory and disk space at least - as with most other applications, we want as much CPU and I/O as we can get.) In this kind of software there is a constant tension between how much memory we should use (so that we don't have to re-read a lot, which of course slows things down) and how our memory and I/O usage affects both the system we are running on and our own ability to successfully finish auditing. All of these things (better scaling in memory and performance, playing nicely with the rest of the applications) we have improved greatly in 2005.04. I plan to blog about this in detail soon.
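That memory-versus-reread tension can be illustrated with a toy spill-to-disk cache (entirely my own sketch, not the product's design): records stay in memory up to a byte budget, the least recently used ones are evicted to a scratch file, and reading an evicted record costs a re-read from disk.

```python
import os
import tempfile
from collections import OrderedDict

class SpillCache:
    """Toy cache: keep recently used records in memory up to a byte
    budget; evict the least recently used ones to scratch files on
    disk, where reading them back costs extra I/O."""
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.mem = OrderedDict()              # key -> bytes, hot data
        self.spill_dir = tempfile.mkdtemp()   # cold data, one file per key

    def put(self, key, value):
        if key in self.mem:
            self.used -= len(self.mem[key])   # replacing: don't double-count
        self.mem[key] = value
        self.mem.move_to_end(key)
        self.used += len(value)
        while self.used > self.budget and len(self.mem) > 1:
            old_key, old_val = self.mem.popitem(last=False)  # evict LRU
            self.used -= len(old_val)
            with open(os.path.join(self.spill_dir, str(old_key)), "wb") as f:
                f.write(old_val)

    def get(self, key):
        if key in self.mem:                   # fast path: still in memory
            self.mem.move_to_end(key)
            return self.mem[key]
        with open(os.path.join(self.spill_dir, str(key)), "rb") as f:
            return f.read()                   # slow path: re-read from disk
```

A bigger budget means fewer slow-path reads but more pressure on the rest of the system, which is exactly the trade-off described above.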

Support for transaction log backups converted from 3rd party backups

The use of 3rd party backup tools is getting more frequent, and some of the people using them are also our customers (or want to become customers.) However, there is a problem with some 3rd party backups and their converters to MTF (Microsoft Tape Format) files, as the converted MTF files do not always match the files that SQL Server would have produced. This used to confuse ApexSQL Log, but starting with version 2005.04 the application handles these inconsistencies correctly and will now read backups that don't perfectly match SQL Server's own backups.

Support for reading transaction logs of SQL Server 7/2000 under SQL Server 2005 and vice versa

The format of the transaction log changed significantly between SQL Server 2000 and 2005. In ApexSQL Log versions prior to 2005.04 it was not possible to read SQL Server 7/2000 transaction logs when connected to a SQL Server 2005 server, or vice versa. However, as migration toward SQL Server 2005 accelerates (which I believe from anecdotal evidence), there is more and more need for auditing old transaction logs on newly migrated servers. This situation happens in two cases:
1. When migration is done by detaching the db files from the old SQL Server and then attaching them to SQL Server 2005. In this case old transactions are still in the transaction log file, but in the format of the previous version of SQL Server.
2. When the need arises to audit old transaction log backups (or detached transaction log files) and the new server is all we have left available.
Starting with ApexSQL Log 2005.04 we handle both of these cases seamlessly. We also handle reading of SQL Server 2005 transaction logs on SQL Server 7/2000 - just in case anyone ever needs that.

Auditing progress in GUI

With the current version of the software, users can't really tell how much longer they will have to wait before the results come in. This is frustrating, especially for the large data sets we are processing, and I'm sorry we never got around to fixing it prior to 2005.04. But the good news is - it's fixed, and I think the audit progress bar is now informative and helpful. The progress is split into two parts:
1. The first 50% of the progress is dedicated to the initial processing of the transaction log sources we are auditing. Here's a typical shot:


2. The second 50% of the progress is dedicated to filtering the transaction log sources according to the parameters set by the user. However, even when there is no progress in the number of matching entries, the timestamp in the log that is currently being analyzed is shown. Here's a typical shot:

This makes it easy to understand what's going on, where the application is, and (approximately) how much more there is to go.
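The two halves map onto a single 0-100% bar in the obvious way; a one-line sketch of the arithmetic (the function name is mine):

```python
def overall_progress(phase, fraction):
    """Map two sequential phases onto one 0-100% bar:
    phase 1 (reading the log sources) covers 0-50%,
    phase 2 (filtering the entries) covers 50-100%."""
    if phase not in (1, 2):
        raise ValueError("phase must be 1 or 2")
    fraction = max(0.0, min(1.0, fraction))  # clamp to [0, 1]
    return (phase - 1) * 50 + fraction * 50
```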

I think that's it for today. But this isn't all - I'll blog more on 2005.04 soon.

Friday, April 27, 2007

What's the difference between database version and database compatibility level?

Paul Randal, over at the SQL Server Storage Engine blog, discusses in his post the difference between database version and database compatibility level. When working with ApexSQL Log this difference is important, since any db on SQL Server 2005, even with a compatibility level of 70 (SQL Server 7) or 80 (SQL Server 2000), still has the same structure of system tables as a db at level 90 (SQL Server 2005). This matters in three cases:
1. When we are doing DDL analysis/recovery since we have to look for changes in different tables under SQL Server 7/2000 and SQL Server 2005 due to complete redesign of system tables in the latter version.
2. When users are directly auditing changes made to system tables (which is sometimes necessary.)
3. When auditing transaction log backups from one version of SQL Server on another. In ApexSQL Log 2005.03, transaction logs from SQL Server 7/2000 could not be read on SQL Server 2005, or vice versa. With the upcoming 2005.04 version this will work seamlessly.

Of all these cases only the 3rd is really problematic, since we depend on SQL Server to provide us with meta-data for all tables, including system tables. So when we see an operation on, say, the "sysobjects" table from a SQL Server 2000 transaction log attached to SQL Server 2005, we aren't able to reconstruct it, since SQL Server 2005 lacks meta-data for the "sysobjects" table. This case is very rare, but even so we will try to solve it after 2005.04 by building in meta-data for the system tables of all three versions of SQL Server that we support.

Btw, in case anyone is wondering how you can audit SQL Server 2005 transaction logs on SQL Server 7/2000 database (considering that it cannot be attached), it can be done by auditing transaction log backups or detached transaction logs.

Thursday, April 19, 2007

Update

I have updated Links section with some Software Development and SQL Server links. I have also added Tools and C++ sections with some links. I'll be adding more of these in the future.

Thursday, April 12, 2007

Neighborhood poisonings

In other news related to the poisoning of my cat Estrella: three weeks ago I adopted my first puppy ever... who has also been poisoned and has been in the vet clinic since Monday. He hasn't eaten any solid food since then and it is still unclear when he will be able to come back home. These poisonings are just the latest in a long string of animal poisonings in my neighborhood. Last year another one of my cats died from it, along with at least two more neighborhood cats and one dog. I managed to save Rah, my tomcat, even though he was already in his death throes when I found him by pure chance (I managed to take him to the clinic just in time.) Since then he has had another poisoning (another poison) and is now suffering from chronic renal failure. This year we had another neighborhood cat die just last month. And now this... This time I have located between 6 and 8 food baits with poison, all one or two houses from mine. A couple of months ago I also found a bait inside my own garden. So far we have no idea who's doing the poisoning.

Ok, that's it for today's bad news. Next up - something I *wanted* to blog about today.

Estrella

One of my cats died last night due to poisoning. Estrella ("Star" in Spanish) came to live with me a bit over 3 years ago. She was a good momma cat and a loving companion. She will be remembered as a warm, sweet, dear nuisance whenever she wanted to be petted (which was often.) Here she is, two days after her kittens were born:

Estrella, you will be greatly missed. There is a hole in our lives where you used to be and the sadness in our hearts. We will not forget you. I hope that you have found your "Door into Summer".

Friday, March 30, 2007

VMWare Converter

Two days ago my notebook's hard disk drive started failing... or at least the system seems to have stopped the disk (while I was away), and after I reset the system a S.M.A.R.T. message appeared warning me of imminent disk failure and urging me to back up the data immediately. I have all my correspondence on that notebook, but that's not the problem - the problem is that I have Office, IM and Skype there, and that I didn't (and still don't) want to pollute my development PCs with all those applications. While I was researching my options I stumbled upon VMWare Converter, which turned out to be just what the doctor ordered: I used the application to convert the physical PC to a virtual OS image on my main development PC. The conversion process took almost 48 hours to finish (I had to restart the conversion process at one point due to a network router reset), but it was worth it - I now have my notebook's image running on the development PC and it's working really great.

So VMWare Converter is a pretty cool product as it is, but here's my wish list for it:
1. Allow virtual-machine-to-physical-machine conversions! This is crucial. With this the circle would be complete, allowing me, for example, to transfer my notebook's OS image back to a new hard disk drive once it's installed. It can be done (google for "VMWare V2P" without the quotes, or see here), but not directly from the software and only by using 3rd party tools (for hard disk image transfer).
2. Extend the software so that it can be used as an incremental backup tool. Instead of backing up parts of a hard disk, one would back up the entire system as a virtual OS image, and in future backups VMWare would correctly recognize and back up just the changed parts. Then if the original PC gets broken, lost, stolen, whatever, one could be up and running on the same image (as it was backed up) immediately.
3. Allow selection of which parts of hard disks are backed up. There are folders that contain huge amounts of data that don't really matter or are backed up somewhere else, and which take precious time during the conversion process.
4. In the case of network failure, the conversion process should not be reset but resumed once the network comes up again (this was the only major annoyance, but it was really *major* and I think it could have been avoided rather easily).
5. Allow VMWare Converter to be licensed on its own. Right now there is the free version and the Enterprise version, which cannot be licensed on its own - one has to license who-knows-what to get it.

Once I get my notebook back I may dare to try a Virtual-to-Physical (V2P) conversion. If I do, I'll post the results here. Somehow I have a feeling that it won't be nearly as smooth as the Physical-to-Virtual (P2V) conversion.

Wednesday, March 28, 2007

Turn AUTO_SHRINK off!!

Paul Randal (Principal Lead Program Manager, SQL Storage Engine), over at the SQL Server Storage Engine blog, has posted today on why everybody should turn AUTO_SHRINK off for their production dbs, enumerating three reasons to do so. He's an authority on the SQL Server Storage Engine, so I would heed what he says.
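For reference, turning the option off boils down to a single statement; here is a minimal sketch, assuming a database named YourDb (a hypothetical name):

```sql
-- YourDb is a placeholder; substitute your actual database name.
ALTER DATABASE YourDb SET AUTO_SHRINK OFF;

-- Verify the change: 0 means auto-shrink is now disabled.
SELECT DATABASEPROPERTYEX('YourDb', 'IsAutoShrink') AS IsAutoShrink;
```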

I have one more reason to add, however: in case of catastrophic data loss (through db corruption, a DELETE without WHERE, DROP TABLE or TRUNCATE TABLE) from which you can't recover by restoring a backup (because you don't have one, or you don't have up-to-date transaction log backups, or whatever), you really really *really* don't want SQL Server going in and shrinking the database files before you've had a chance to recover the data. What you want to do instead is:
1. Put the database in read-only mode immediately, so that it's left in a state as close as possible to the one it was in at the moment of the data loss (if you experienced a hardware failure or something similar, your db files are now detached - just leave them like that for now).
2. Download ApexSQL Log and install it. If you need to analyze a database that's online, install ApexSQL Log's server-side components on the server. It doesn't matter whether or not you have transaction logs for the database - the application will try to recover data from whatever you have (besides, transaction logs can help only with the recovery of deleted data).
3. Run ApexSQL Log's Recovery Wizard and choose the recovery option best suited to your scenario. The Recovery Wizard will recover all the data (including BLOBs) it can still find in the database and create a recovery script.
4. Run the recovery script on another database to check the data. If everything is fine - great! If there's a problem, or you think the software should have recovered more data, please contact us at support@apexsql.com and we will help you out.
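Step 1 above is a single statement; a minimal sketch, again assuming a database named YourDb (a hypothetical name):

```sql
-- YourDb is a placeholder; substitute your actual database name.
-- ROLLBACK IMMEDIATE kicks out any open transactions so the database
-- can be frozen right away, before anything else touches the files.
ALTER DATABASE YourDb SET READ_ONLY WITH ROLLBACK IMMEDIATE;
```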

Friday, March 23, 2007

On Technical Support

At ApexSQL we strive to give the best possible technical support we can. Brian (my boss) has already blogged on this in Support Engineer Creed. I'm writing this to give my thanks to all the people who, over all these years, have put their trust in us. It gives me great joy to talk to you, our customers, daily, and I love being able to help you (and I feel bad when I can't.) Thank you.

Thursday, March 22, 2007

Chileno!

Hoy recibí el comunicado oficial de mi ciudadanía chilena. Las palabras no me bastan para expresar la felicidad y la gratitud que siento.

Danas sam primio zvanicno obavestenje o mom cileanskom drzavljanstvu. Reci mi nisu dovoljne da izrazim srecu i zahvalnost koju osecam.

Today I have received the official notice of my Chilean citizenship. Words fail me to express the happiness and the gratitude I feel.

Monday, March 05, 2007

Upcoming ApexSQL Log 2005.04

We are in the final stages of releasing ApexSQL Log 2005.04 to our QA team. In previous versions the bulk of the improvements were in 64-bit support and in the recovery process, while auditing sort of took a backseat. This upcoming version, however, will bring some very serious improvements on the auditing end, especially with the infamous (at least in our little world of transaction log auditing) UPDATE reconstruction problem (which has been a very exciting problem to solve, in a geekish sort of way, of course :) I invite those who weren't very happy with UPDATE auditing in the past to check it out once the new version is out. Besides this, we have further improved the BLOB recovery process (adding full support for the new VAR*(MAX) types in SQL Server 2005) and generally improved reliability, performance and memory footprint. All in all, I hope this new version will be a real boon for our customers.

Digital Blasphemy

As I mentioned a couple of weeks ago, I switched to two 1,600 x 1,200 screens. I can't say enough good things about that - it's just marvelous for productivity. However, with so much screen real estate, the bland default background of Windows 2003 Server gets boring very quickly, so I set about searching for alternatives. I have known about the Digital Blasphemy site for a long time, and while I like the images very much, it never attracted me enough to subscribe - my screen was always too cluttered with too many windows to appreciate a great-looking desktop background. This time, however, it was different, and with a lot of great images provided at 3,200 x 1,200 I was hooked. I subscribed for a year, and if it works out well for me I'll go with a lifetime subscription next year.

Check out the site for yourself and see if you like it. All the art is created by just one guy, Ryan Bliss, who also runs the site. You can read more about him here.

Wednesday, February 14, 2007

Going multi-monitor

Yesterday I received my DELL 2007FP monitors! Here's a shot:


So far it has been a joy to work with, and I'm really happy I chose two 20" 1600x1200 screens over one 24" 1920x1200 one. This is my primary development machine, and it feels great to run VS2005 on one screen with Management Studio, Firefox, IE7, VMware and the MSDN Library running on the other!

The first thing I did, though, was install a trial of UltraMon. It's a great utility, and unless something catastrophic happens with it, I will buy it soon. The only thing so far that I wish it would do and it doesn't is show the Alt+Tab window on both monitors, not just on the primary.

I also needed a new keyboard, so I ran out and bought the Microsoft Natural Ergonomic Keyboard 4000 (I sure hope they don't keep adding new adjectives to their product names.) I love Microsoft Natural keyboards and have been using them happily for over 7 years now. I've been through three of them so far; my keyboards do heavy mileage (although one is still alive on another PC). On the other hand, I'm still using the good old Microsoft IntelliMouse Optical - it's over 6 years old now and still works great (although I'll admit I've been eyeing a new one.)

In the background you can see my faithful HP notebook. It's almost two years old now, so I will need to upgrade it soon. It runs Outlook, IM and Office, and it collects general clutter. I only rarely develop on it.