
Universal Driver Discussion. (Networking Products)

At 16 FEB 2004 12:46:18PM Bruce Cameron wrote:

I have a few questions and concerns regarding the UD (which is very exciting) and wanted to start a discussion.

First, I wanted to get some clarification of things mentioned on the products page.

1.

"Who should purchase the Universal Driver 3.0?

There are a few groups that fit this category. If you are running Netware with IP or want to, then you will need to run this with your Revelation-based applications. If you have your Revelation-based applications running from a Linux server, this is also for you. Running your Revelation-based applications from a Windows Server? You guessed it - this product is for you."

Did you leave out "Everyone could and should!"? I'm not sure I understand "There are a few groups that fit this category."

Sounds like the majority of groups are considered.

This is not an easy question to ask, as it may be obvious to others that you're joking and I am clueless?!

2.

"Protection from data corruption.

Data corruption can occur for a variety of reasons, including bad network cabling, power fluctuations, hardware errors, no record locking, or users aborting processes. Often it is impossible to predict or prevent such events. The Revelation Universal Driver protects you against data corruption when such events do occur by recognizing that an error occurred and notifying the user, while protecting the data table from incomplete transactions, virtually eliminating data corruption errors (Group Format Errors)."

This is great. How does this work with "no record locking and users aborting processes"? I am curious from a programming standpoint.

This would seem to add greater benefit and integrity to MFS programming.

3.

"Increased speed performance.

The Revelation Network Products are designed to enable database activity to be off-loaded onto the file server. This architecture dramatically reduces the amount of network requests. The reduction of network requests translates to increased performance. Benchmarks show a dramatic performance increase on local area networks (LANs), and an even greater increase on wide area networks (WANs)."

Lots of questions on this one. "designed to enable database activity to be off-loaded onto the file server." Is this an option?

Since today's commercial file servers (Novell and Windows) do not use a time-slice clock (that I know of), how is the load distributed on, for example, a 200-user OI system with 10 people running reporting or processing against several files of 2 GB or more?

I ask this question because, after reading this, it would seem to me that I could actually eliminate dedicated indexer(s) with this driver, in today's world with hardware speeds being what they are. And with that in mind, maybe you are headed in that direction with this. If so, is there a target date? Are there also any plans for 'mirroring' Rev databases as an option, so that if server-A goes down, server-B kicks in and the UD handles the transfer?

4.

"There is only one REVPARAM file no matter how many different subdirectories you have with .LK and .OV files. "

I love this, but I am curious how it knows what to do if I use, for example, local lists on C:, or when I am doing support on the fly and have tables that did not have a REVPARAM before.

5.

"There is now support for files larger than 4 gigabytes.

Support for very large frame sizes."

These two concern me, because a client who saw this asked, "Well, is that why we have had corruption on our largest files?" My immediate response was: not necessarily, as many factors can contribute to data corruption; this statement means that we could have tables larger than 4 GB, but the LH Service did not 'support' them, in that it could not guarantee that they would be corruption free once they got that big. Is that a load of crock? And since I don't have the documentation on the LH Service in hand, what are the limitations of the services below the UD? We have several clients with tables larger than 4 GB. There just seems to be potential for some angry folks.

6.

"The LHVerify facility is integrated onto the server side."

Hooray!!

Well, that's a start. I am hoping that others will chime in. I personally think the UD is a great advancement for the product line and can't wait to get our clients onto it as soon as possible, but I would like to understand some more about it.


At 16 FEB 2004 01:13PM The Sprezzatura Group wrote:

Bruce, as it is President's Day you might not get an official answer today so we'll add our 2 cents worth.

1. "Who should purchase the Universal Driver 3.0?"

As this is primarily a marketing document, it is likely that it was cut and pasted from a previous document. Anybody buying new would buy Universal. Anybody wanting a Linux server would buy Universal. Anybody running on Novell and wishing to upgrade to use IP would buy Universal. So it should be considered the driver of choice.

2. "This is great. How does this work with "no record locking and users aborting processes"? I am curious from a programming standpoint. This would seem to add greater benefit and integrity to MFS programming."

As with other Services you are protected from data corruption by using the UD. As the UD is the only thing doing reading and writing, it makes no difference whether the programmer locks or not. HOWEVER, this will NOT protect you from logical corruption or from having your changes overwritten by somebody else's update. Similarly, if the user aborts a process, the update will either be actioned by the UD or not actioned, so no corruption occurs; but partial updates remain partial updates. The benefit here is that when it is the responsibility of the workstation to perform an update, and the workstation crashes, it could do so midway through a file redistribution and corrupt the file. If the workstation crashes now, this is no longer the case.
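To make the logical-corruption point concrete, here is a rough BASIC+ style sketch; the CUSTOMERS table, field position 3 and the variables are invented purely for illustration. It shows the classic lost update that no driver can prevent when neither station takes a lock:

    * Two workstations run this at much the same time, with no locking.
    Open 'CUSTOMERS' To hCustomers Else Return

    Read custRec From hCustomers, custId Then
       * Both stations read the same starting value of field 3 here...
       custRec<3> = custRec<3> + amount
       * ...so whichever Write lands second silently wipes out the first update.
       * The UD keeps the frames physically intact, but the data is still wrong.
       Write custRec On hCustomers, custId
    End

The file itself stays healthy throughout; it is the application data that suffers, which is why you still lock.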

3."designed to enable database activity to be off-loaded onto the file server." Is this an option?

The UD is the same as other services in this respect. File updates take place at the server, not at the workstation. When you write to an LH file it is not just a single operation: it may need to increase the modulo of the file, it may need to relocate the row in the frame list, it may need to do any one of a number of things. A single write request could easily generate 10 I/O requests. With the UD and other services, these additional 10 take place at the server, not at the workstation.

"I ask this question because after reading this it would seem to me that I could actually eliminate dedicated indexer(s) with this driver, in today's world with hardware speeds being what they are."

Dedicated indexers still remove locking contention. Implementing direct updates at the service level is something that has been discussed and may yet see the light of day.

"Are there also any plans for 'mirroring' Rev databases as an option so that if server-A goes down server-B kicks in and the UD handles the transfer?"

It would not be appropriate for us to discuss works in progress or in planning, but you are not the only person to view this feature as desirable (especially in an enterprise context), and we would be very surprised if such a feature did not make it into the product.

4. "...am curious how it knows if I use..."

The single RevParam is used to provide default behaviour for all files accessed from the server - not locally.

5. "...have tables larger than 4 GB but the LH Service did not 'support' them in that it could not guarantee that they would be corruption free once they got that big."

It was previously physically impossible to have a file with an LK or OV portion greater than 4GB, as this was not supported due to addressing limitations. With the UD the header structure has changed and this is now possible.

The Sprezzatura Group
World Leaders in all things RevSoft


At 16 FEB 2004 03:21PM Bruce Cameron wrote:

"Bruce, as it is President's Day you might not get an official answer today so we'll add our 2 cents worth."

"As with other Services you are protected from data corruption by using the UD. As the UD is the only thing doing reading and writing, it makes no difference whether the programmer locks or not. ... if the workstation crashes it could do so midway through a file redistribution and corrupt the file. If the workstation crashes now, this is no longer the case."

Well, this is interesting. At a couple of our sites, while connected through PCAnywhere and running a task such as an index update or LHVERIFY, the workstation will seem to lose its connection to the server, but the server looks like it is still processing, and we have to abort the process, usually through ARev. With this new protocol, what happens when the workstation goes down and comes back up? Does the UD track that and somehow resume, or are you saying that it works like a UPS, performing some 'orderly shutdown' tasks on the file, frames and group(s) to maintain the integrity of the forward and backward links in the frame headers as well as the end-of-item and end-of-frame markers? I guess this is two questions: one dealing with tables that are being written to, and another where functions are running that take a while but do not update (i.e. LHVERIFY). Thanks for the reply. Again, I am really excited about the direction of the OI products and the breaking or merging of file servers and true multiuser, multithreaded operations.


At 16 FEB 2004 05:12PM The Sprezzatura Group wrote:

We should still use locking within code to ensure that others cannot update that which we are trying to. The UD will protect from physical, not logical, corruption. We always use a Locker(file, Row, Msg) type construct.

To grossly simplify - imagine writing a row is a case of stacking three bricks on top of each other. Without a UD the workstation is responsible for stacking those bricks, and if the workstation is interrupted after stacking only two, you have a corruption. With a UD the workstation says to the UD "write this". If the workstation powers down in the middle of sending, the UD will get an incomplete message and ignore it. If it sends it successfully, the UD will stack the three bricks. So nothing the workstation can do can interrupt the UD stacking the bricks. Now if the SERVER abends you could still get corruption, but that will be the least of your worries. Workstations are MUCH more prone to user error than servers, so it is good to take the responsibility off the workstation. Plus of course this reduces network traffic - the server no longer needs to send the three bricks to the workstation for it to stack. It just accepts the request and does the stacking itself.
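If it helps, here is a very rough sketch of what we mean by a Locker(file, Row, Msg) type construct, in OI BASIC+ terms. The routine name, the retry count and the message handling are purely illustrative, not our actual utility:

    Function Locker(hFile, key, msgText)
    * Illustrative sketch only. Returns 1 if the row lock was taken, 0 if not.
       Declare Function Msg
       gotLock = 0
       attempts = 0
       Loop
          Lock hFile, key Then
             gotLock = 1
          End
          * (a short pause before retrying would normally go here)
          attempts = attempts + 1
       Until gotLock Or (attempts >= 3)
       Repeat
       If gotLock Else
          * Somebody else holds the row - show the caller-supplied message
          rv = Msg(@Window, msgText)
       End
    Return gotLock

The calling code then follows the usual shape: if Locker() returns true, read, change, write and unlock; if it returns false, do not write at all.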
The Sprezzatura Group
World Leaders in all things RevSoft


At 16 FEB 2004 08:28PM Bruce Cameron wrote:

Oh absolutely, I think we are both saying the same thing, maybe me a little rhetorically. Coming from a multi-user background (supermicros), I've always been puzzled by the practicality of PC-based file server applications for multiple users. Unfortunately the diskless PC or thin-client workstation never took off, but at least the Citrix model adapted by Terminal Services is a step in the right direction in recent times. That may be another discussion: how a TS session vs. a physical workstation connection differs with respect to OI/UD. I understand what you're saying... I would much rather use an NCOPY to move or copy files around on a server, so that the server doesn't 'serve up' and then back down. So how do you/Rev and people feel about READU and READVU and the other things in the last post?


At 17 FEB 2004 02:41AM The Sprezzatura Group wrote:

Personally (Andrew, not Sprezz) I never liked ReadU and WriteU - they're not sufficiently granular for me. However, they were implemented in AREV and seem not to have made it to OI. I also dislike ReadV and WriteV - they were implemented for Pick compatibility and are inefficient - they still perform a READ/WRITE of the entire record. So it goes without saying that WriteVU etc...

The Sprezzatura Group
World Leaders in all things RevSoft


At 18 FEB 2004 11:28AM Bruce Cameron wrote:

Fair enough. I never liked (in Pick or ARev) ReadV or WriteV; too much overhead, and I never use them myself either. However, having the system (OI) handle a lock during a read in one command - what's not to like? For one, it makes for better code in this community. I have seen plenty of ARev code and OI code that has no locking and should. That's not to say that a ReadU would cure that, but it means more people can drive an automatic than a standard. In addition, it would be compatible with Pick code, and with the addition of options within OI as mentioned, it would be far superior.


At 18 FEB 2004 11:48AM The Sprezzatura Group wrote:

Agreed - one of the things we are asking for in a future OI release is the ability to preprocess compilation ourselves. That way we could swap a ReadU for our own locker calls, add Pick compatibility, etc. - even object orientation...

The Sprezzatura Group
World Leaders in all things RevSoft


At 18 FEB 2004 04:26PM Pat McNerthney wrote:

A possible good way to use READV and WRITEV would be to implement the operation in the filing system and at the server for service-based installations. Then you would actually get the benefit of only transmitting over the network the value being operated on, plus updates would automatically maintain internal record integrity.

Pat
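For anyone who has not run across them, this is roughly what the verbs under discussion look like in ARev/OI BASIC+; the CUSTOMERS table and field position 3 are invented for illustration. Today each of them still moves the whole record under the covers, which is exactly the overhead a server-side implementation like Pat describes would remove:

    Open 'CUSTOMERS' To hCustomers Else Return

    * ReadV fetches a single field, but the whole record is still read today.
    ReadV phone From hCustomers, custId, 3 Else phone = ''

    * WriteV replaces a single field, but likewise rewrites the entire record.
    WriteV newPhone On hCustomers, custId, 3

    * The equivalent whole-record form, for comparison:
    Read custRec From hCustomers, custId Then
       custRec<3> = newPhone
       Write custRec On hCustomers, custId
    End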