Converting Advanced Revelation Applications to the SQL Server Bond
Published By | Date | Version | Knowledge Level | Keywords |
---|---|---|---|---|
Revelation Technologies | 23 APR 1992 | 2.1X | EXPERT | SQL, SQLSERVER, BOND, MS_SQLC, PERFORMANCE, SECURITY, EXEC_TRANS_SQL, RBASIC, R/BASIC, WINDOW, CONVERSION, POPUPS, REPORTS |
In August of 1990, Revelation Technologies introduced the SQL Server Bond for Advanced Revelation Version 2.0. This created an opportunity for the Advanced Revelation developer to take advantage of the benefits of client-server computing by incorporating the Microsoft SQL Server in an Advanced Revelation application. This bulletin describes points to consider when converting an existing Advanced Revelation application to take advantage of the benefits of the SQL Server.
Background Required
This bulletin is not intended as a substitute for either the Advanced Revelation, SQL Server Bond, or Microsoft SQL Server manuals. It is intended as a supplement to these manuals based on experience with an actual conversion effort.
Before attempting to convert an application, you should be very familiar with it; otherwise, you'll spend a lot of time studying the approach of the original developer, especially in the R/BASIC code. In that case, you may wish to use the original application as a working prototype and rewrite it for the optimum approach using the bond. This bulletin will help you understand what is involved so that you can make that decision. However, it will not address a rewrite, only a conversion.
You will also need to have studied the Microsoft SQL Server. Ideally, you should have undertaken the Microsoft University courses on "Database Administration" and "Transact-SQL" or the equivalent self study.
Reason for Migrating
The primary motive for converting is usually performance. How well your converted application runs is dependent on the degree to which processes are moved from the front-end (Advanced Revelation) to the back-end (the server). In fact, most of the conversion effort is directed toward achieving this.
The SQL Server Bond
The SQL Server is a complete database management system on its own. To communicate with the server, you must issue commands to it in its "native" language, Transact-SQL. This is true not only for retrieval commands (e.g. SELECT * FROM AUTHORS) but for each individual I/O request as well. For example, to read a particular record, you must issue a Transact-SQL statement that returns only one row (e.g. SELECT * FROM AUTHORS WHERE AU_ID = "100-11-9999").
While this sounds daunting, the entire purpose of the bond is to make Transact-SQL transparent to you. The bond, identified internally as MSSQL_BFS, supports I/O operations such as READ, WRITE, etc., by creating Transact-SQL scripts and passing them to the SQL Server via the Microsoft DB library (DBLIB).
What is significant about the SQL Server bond is that the back-end is an intelligent database server as opposed to a file server. It is the database server capability of the SQL Server that the conversion effort must focus on in order to get real application benefits.
There are essentially two ways of using the SQL Server bond. One way is to allow the bond to create Transact-SQL statements automatically for each I/O request you make. This is the default means of using the server, the one that the bond uses if you simply pass it READ, WRITE, and similar statements from R/BASIC. Because there is some overhead involved in allowing the bond to create the Transact-SQL statements on-the-fly, you may find it more efficient under some circumstances to go right to the server.
You can get some of the benefits of the SQL Server by executing interactive SQL statements in the SQL window or embedded SQL statements from within R/BASIC programs. After some manipulation, these SQL statements are passed to the server for direct execution there. There is some overhead both in interpreting these SQL statements and in using their results, but an Advanced Revelation SQL statement executed against server tables will still be much more efficient than an equivalent R/LIST command.
Direct Access to the Server
Another method of accessing the server is to create your own dynamic or static Transact-SQL statements and hand these to the server. This is a very powerful aspect of the bond, one that is extremely fast and enables you to access the server more-or-less directly from within R/BASIC programs.
The API to the SQL Server is a subroutine available from within R/BASIC called MSSQL_C. It is written in C and linked to Advanced Revelation using the Advanced Revelation C Interface. This routine incorporates 52 DBLIB functions and macros, along with the logic to return result sets to Advanced Revelation as an Advanced Revelation dynamic array. The result set can be up to 64K in one shot; result sets larger than 64K can be returned in multiple large chunks.
Also available is a subroutine called EXEC_TRANS_SQL that allows direct access to the SQL Server in R/BASIC, but is much simpler to use and has the added benefit of using the existing connection to the server. Each connection takes about 10K. However, this routine is limited to 64K of results. That is still quite a bit since the data is being returned as an Advanced Revelation-delimited array.
Both of these subroutines are easy to use and return results extremely quickly. Even so, creating your own Transact-SQL statements is not always necessary, and it is likely that you may wish to use a combination of direct calls with higher-level R/BASIC READ and WRITE commands in the same application.
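As a hedged sketch of what such a direct call might look like (the actual EXEC_TRANS_SQL argument list is documented in the SQL Server Bond manual; the parameter names below are assumptions, not the documented interface):

```
* Sketch only -- parameter names are illustrative, not the
* documented EXEC_TRANS_SQL interface
STMT = "SELECT AU_ID, AU_LNAME FROM AUTHORS WHERE STATE = 'CA'"
CALL EXEC_TRANS_SQL(STMT, RESULTS, ERRCODE)
IF ERRCODE ELSE
   * RESULTS comes back as an Advanced Revelation dynamic array:
   * rows separated by field marks, columns by value marks
   ROW.COUNT = COUNT(RESULTS, @FM) + (RESULTS NE "")
END
```

Because the statement executes entirely on the server, only the result set crosses the network.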
Planning the Conversion
Advanced Revelation offers tremendous flexibility in the approaches you can take to convert an application from a file server architecture to a client-server one. There are three primary benefits of converting an application to the SQL Server:
- Interoperability.
- Performance.
- Security.
Each of these must be considered together with the functional requirements of the application to determine the best conversion approach.
The most important decision is that of scalability vs. performance. If the application needs to be fully scalable, you must focus on the use of the high-level tools (windows, popups, etc.) and non-specific API procedural calls and operations. The application would then be able to run if the data resides in other environments bonded to Advanced Revelation, such as Oracle and DB2.
If the application needs maximum performance, your approach must center around the use of the high-level tools, but in addition use specific API procedural calls and operations. For best performance with the SQL Server, some R/BASIC code with Transact-SQL calls is recommended. Although this would make it harder to scale the application to other platforms, it would still be fully compatible with any platform running Sybase.
Before converting an Advanced Revelation application to the SQL Server, you will also need to make a decision about interoperability. This refers to having the data on a central server with access from multiple front-ends. For example, it is possible that the data in your application could also be used by spreadsheets, other non-Advanced Revelation database applications, etc. Does the functional requirement of the application demand this type of interoperability?
If the answer is yes, you must plan to normalize the data on the SQL Server. This means abolishing multivalued fields. If the answer is no, or only part of the data needs to be interoperable, you can normalize just the relevant portions of the data. In that case, not all of the data would have to be on the SQL Server. However, for most applications we recommend that all of the data be on the SQL Server, whether you choose to fully normalize it or not. This is primarily because of the need for joins, which for maximum performance should not occur across platforms.
Performance is often the most important consideration. You gain speed from the SQL Server by using it to perform processes that would otherwise be done by Advanced Revelation across the network. Obvious examples include searching tables and updating multiple rows in one operation. As noted earlier, to get the best performance, you will want to put as much of the work as possible on the server; the more you can put on the server, the better the application will perform.
Security is gained by centralizing the data and allowing only a single point of access. The bond will fully utilize the security on the SQL Server.
Analyzing the Changes Required
Once you have determined these functional requirements for the application, planning an approach consists primarily of analyzing the impact of file structure changes to the application. You should consider the following questions:
What areas are particularly performance sensitive? Windows? Reports?
- These will be areas of concentration when optimizing input/output.
- Look for processes that can become stored procedures on the SQL Server.
- Identify and focus on the really tough lookups or queries.
Where are the Advanced Revelation indexes?
- These indexes will not be used for data that resides on the SQL Server.
- Relational index fields will not be necessary in the data files because the SQL Server can do joins at the server to achieve similar results.
What Advanced Revelation-specific filing system calls are made in the application?
- Calls to BTREE.EXTRACT go directly to Advanced Revelation's own indexing system. These calls will no longer be valid once the data leaves Advanced Revelation, as the SQL Server maintains its own indexes, which are invisible to BTREE.EXTRACT.
- Modifying filing systems (MFS routines) are essentially Advanced Revelation triggers; these can become Transact-SQL triggers and rules.
What data types will you use for data residing on the SQL Server?
- The SQL Server has plenty of data types, enough to match all of Advanced Revelation's.
- Some character-type fields will become type CHAR, which is fixed-length; others should be cast as VARCHAR, which is variable-length up to 255 bytes.
- The largest data type is TEXT which is allocated in 2K pages. For best performance, don't use too many TEXT columns.
- Make sure integer data is really integer. SQL Server doesn't allow spaces, only digits.
What Advanced Revelation data fields or data types will no longer be valid?
- Relational indexes are no longer needed.
- Text fields should not store the delimiter in the data for SQL Server interoperability.
- Text fields should be treated as text non-justified in Advanced Revelation windows.
What sub-tables will need to be added to support one-to-many relationships that were formerly multivalues?
- These tables will have to be built as part of the migration process.
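To make the sub-table idea concrete, here is a hypothetical Transact-SQL sketch (table names, column names, and types are assumptions, not taken from this bulletin) of a former multivalued association split into its own table:

```
/* Hypothetical example: ORDERS once held multivalued line items */
CREATE TABLE ORDERS
   (ORDER_ID   INT          NOT NULL,
    CUST_CODE  CHAR(6)      NOT NULL,
    ORDER_DATE DATETIME     NULL)

CREATE TABLE ORDER_LINES
   (ORDER_ID   INT          NOT NULL,   /* joins back to ORDERS */
    LINE_NO    SMALLINT     NOT NULL,
    ITEM_DESC  VARCHAR(40)  NULL,
    QTY        INT          NULL)
```

Each multivalue in the original record becomes one ORDER_LINES row, joined to its parent on ORDER_ID.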
File Conversion
There are several steps you must take on the SQL Server before any Advanced Revelation data can be moved there. A prerequisite is to establish the SQL Server data device, log device, and database.
The first step thereafter in the conversion process is defining the tables on the SQL Server. To do this, define the volume record, which identifies the server and database to Advanced Revelation. Then use the Definefile window to create the tables you will need on the server. This process creates the control file and the SQL Server table at the same time.
When defining a table, we recommend that you use the underscore character (_) rather than the period (.) in the table name as well as the column names.
Symbolic fields are not used as extensively when the data is on the SQL Server. Do not define or copy the symbolics initially. You can examine these on a case-by-case basis as the windows are modified. You can use the SQL Server to do some operations that were previously done with symbolics (see below).
Once the tables are defined on the SQL Server, you can copy the data. Several methods are available. If the file structure has not changed, as with a code table, use the Advanced Revelation Setfile and Copy commands to copy the data directly across the filing systems. Since many Advanced Revelation files are indexed, you will probably get the best results by using the original file's proper name and attaching the SQL Server target file as the Qfile.
If the file structure has changed, you will need to write an R/BASIC program that reads the source file and parses the old record format into the new multiple-record format. This would be the case, for example, if you elected to normalize some multivalues: a single row in the Advanced Revelation file will create a single row in one new table and multiple rows in another new table. This program is a simple one and can be written entirely in R/BASIC using READ and WRITE commands. Because these programs will only be used once, you should keep them simple.
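A minimal sketch of such a one-time migration program follows; the file names and field positions are hypothetical, assuming fields 3 and 4 of the old record are associated multivalues:

```
* One-time migration sketch; file names and field positions
* are hypothetical
OPEN "ORDERS" TO OLD.FILE ELSE STOP
OPEN "SQL_ORDERS" TO HDR.FILE ELSE STOP
OPEN "SQL_ORDER_LINES" TO DET.FILE ELSE STOP
SELECT OLD.FILE
DONE = 0
LOOP
   READNEXT ID ELSE DONE = 1
UNTIL DONE
   READ REC FROM OLD.FILE, ID THEN
      * single-valued fields become the header row
      WRITE REC<1> : @FM : REC<2> ON HDR.FILE, ID
      * each multivalue becomes one detail row
      MV.COUNT = COUNT(REC<3>, @VM) + (REC<3> NE "")
      FOR I = 1 TO MV.COUNT
         WRITE REC<3,I> : @FM : REC<4,I> ON DET.FILE, ID : "*" : I
      NEXT I
   END
REPEAT
```

The detail key shown here simply appends the line number to the parent key; any scheme that yields unique, joinable keys will do.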
Examine the data in your source files very carefully. Remember that the SQL Server is very strict about data types and will reject an integer with a leading or trailing blank. The cleaner the source data, the easier the migration. If the data violates a data type, the copy process will display an error from the SQL Server and wait for you to acknowledge it. Ideally, you would want to start the process and let it run without interruption.
Windows
You will need to make several changes to the windows in your application in order to accommodate the changes in the file structure. The items that you must consider are the table name, column names, symbolics, and R/BASIC code attached to the window.
The number of changes that you need to make depends entirely on the degree of change to the file structure and the performance sensitivity of the window. If the file structure has not changed, such as in a simple table maintenance function, the window will work as before after you simply adjust the table name and column name references in the window, the symbolics, and any R/BASIC code.
If the table has few columns or the window is not used often in the application, you would be advised just to get it working and leave it alone; it can always be enhanced later. This is one of the advantages of the bond. Instead, concentrate your time on the heavily-used data entry windows where performance is needed the most.
If the table name has changed, you can change the name of the table assigned to the window using the Customize selection on the Paint menu. This should be done with the old file and the new table attached since Paint will not allow you to select a template for changes when the corresponding dictionary is not attached.
To adjust the column names, go to the prompt window and change the Field name prompt to the new column name. If the column no longer exists in the table, remove the prompt from the window. These prompts can still appear on the window, but will have to be handled a different way.
Symbolic Fields
Several considerations apply to symbolic fields. All of the symbolics that are defined in the original file will work as long as table names and column names are adjusted to the new names. However, these operations are not as efficient on SQL Server tables and are likely candidates for conversion to a back-end process.
In general, symbolics on SQL Server tables should be limited to operations on the content of @RECORD. If another file is involved, the operation should be moved to the back-end, since it would involve a join. The data can still be displayed on the window, but it, too, will be handled in a different way.
Advanced Revelation developers typically perform lots of validation, rules, and window manipulation using R/BASIC code. Look for code that should become a rule or trigger on the back-end. The process of saving a row will activate these rules and triggers on the SQL Server.
To optimize the performance of the window and to provide the facility to display or edit fields from another table, create a commuter module to handle Replace read, Replace save, and Replace delete processes. Then implement these processes as calls to MSSQL_C or EXEC_TRANS_SQL. This opens up the possibility to call stored procedures and views on the SQL Server as opposed to the dynamic ANSI SQL created by the bond.
If the window contains Pre-save or Post-save operations, these will likely be Advanced Revelation rules or triggers that should be moved to Transact-SQL and removed from the window. Again, the more you move to the back-end, the faster the window will run.
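As a hedged example of moving such a rule to the back-end (the object names are hypothetical, and the RAISERROR form shown is the older Transact-SQL syntax), a trigger might look like this:

```
/* Hypothetical: reject order lines with non-positive quantities */
CREATE TRIGGER TR_ORDER_LINES_INS
ON ORDER_LINES
FOR INSERT, UPDATE
AS
IF EXISTS (SELECT * FROM INSERTED WHERE QTY <= 0)
BEGIN
   RAISERROR 99001 "QTY must be positive"
   ROLLBACK TRANSACTION
END
```

Once this trigger is in place, the check fires on every save regardless of which front-end performs the update.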
XLATE Replacements
Your commuter module can also handle the jobs formerly done with symbolics, especially those based on XLATE. For example, your window may contain a symbolic field using XLATE to retrieve a description for a code entered in the window. A good way to handle this is to define a protected prompt on the window. Define such a field by turning off the autobind function of the window and then creating a prompt that points to a position in @RECORD beyond the last column position of the file assigned to the window.
To load this prompt with a value, attach a branch of your commuter module as the post-prompt process of the code field. This branch can call a stored procedure to validate the code that was entered and to return the description of the valid code. The description is then placed into the value of @RECORD, thereby populating the display-only prompt on the window. Conveniently, this data is thrown away when the record is saved.
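A branch of such a commuter module might be sketched as follows; the stored procedure name, field positions, and the EXEC_TRANS_SQL parameter list are all assumptions for illustration:

```
* Post-prompt branch sketch (names and positions are hypothetical)
CASE BRANCH = "POST.CUST.CODE"
   STMT = "EXEC GET_CUST_DESC '" : @RECORD<5> : "'"
   CALL EXEC_TRANS_SQL(STMT, RESULTS, ERRCODE)
   IF RESULTS NE "" THEN
      * position 25 is the unbound display-only prompt
      @RECORD<25> = RESULTS<1>
   END ELSE
      * invalid code; clear the description
      @RECORD<25> = ""
   END
```

Validation and lookup thus become a single round trip to the server instead of an XLATE against a local index.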
The commuter module will also handle the fields that have been moved to another file. These fields are typically multivalues that were formerly in the file associated with the window. As in the symbolic example above, you can use a non-bound @RECORD position to support data from another file. These fields are populated by the Replace read process, which performs an MSSQL_C or EXEC_TRANS_SQL query to retrieve the set of associated rows and then loads the @RECORD positions with the data. Conversely, when the save operation occurs, the commuter module reverses the process and updates the changes to the target files, again using calls to the C routines.
Finally, the commuter module will also handle the job formerly done with Advanced Revelation relational indexes. Attach a branch of the commuter module to a softkey which performs a Transact-SQL query to select rows from the related table. These rows might only be a key column which is turned into a browse list for a window, or they might be complete rows of data which are loaded into associated multivalued fields.
Popups
The best way to do a table popup is to set up a simple subroutine for each table. This subroutine creates a Transact-SQL query, sends it to the server, and then hands the results set directly to the POP.UP function, which treats the data as an R-mode popup. The POP.UP function returns the field data in @ANS which can then be used anywhere. These types of popups are easy to create and extremely fast. Such a subroutine can be set up as an [F2] option on any field in place of the "P" code.
Popup subroutines like these can use stored procedures, views, or a variety of dynamic SQL scripts. Dynamic SQL works very well here. Have the popup subroutine prompt the user for selection criteria using the MSG or POP.UP routines. Based on user selection, format a Transact-SQL select statement and send it to the SQL Server.
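A popup subroutine of this kind might be sketched as follows; the POP.UP call is abbreviated (see the Advanced Revelation reference for its actual argument list), and the table and column names are hypothetical:

```
SUBROUTINE CUST_POPUP
* Hedged sketch; the POP.UP call below is abbreviated, not the
* documented argument list
STMT = "SELECT CUST_CODE, CUST_NAME FROM CUSTOMERS ORDER BY CUST_NAME"
CALL EXEC_TRANS_SQL(STMT, RESULTS, ERRCODE)
IF RESULTS NE "" THEN
   * hand the delimited result set to POP.UP as R-mode data;
   * the user's selection comes back in @ANS
   @ANS = POP.UP(0, 0, RESULTS)   ;* abbreviated call
END
RETURN
```

The same shell works for any table; only the Transact-SQL statement changes.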
Use the database administration power of the SQL Server to set up indexes on the fields that are most commonly used in these types of queries. Be sure to use the Transact-SQL Update Statistics statement periodically for maximum performance. A particular advantage of these indexes is that no changes need to be made in your application to utilize them.
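For instance (the index and table names are hypothetical), the server-side tuning may be no more than:

```
/* Hypothetical index on a column commonly used in popup queries */
CREATE NONCLUSTERED INDEX IX_CUST_NAME ON CUSTOMERS (CUST_NAME)

/* refresh the optimizer's distribution statistics periodically */
UPDATE STATISTICS CUSTOMERS
```

The query optimizer picks the index up automatically, which is why the application itself needs no changes.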
Identify all of the prompts in your windows that use the catalyst calls that are specific to Advanced Revelation indexing. For example, the B or V catalyst codes use Advanced Revelation Btree indexes. Replace these catalyst calls with popup subroutines.
Reports
The best reporting tools for the SQL Server are SQL-based: Query-by-Example (QBE) or the interactive SQL window with the SQL Assistant. SQL-based tools are faster because there is no filing system interpretation of the operations. The SQL Script is handed more-or-less directly to the SQL Server for processing.
You can also run reports using Transact-SQL directly. Use the Server Access Window (SAW). This window is a functional equivalent of SAF except that SAW is faster. An advantage of this window is that you can use stored procedures and views on the SQL Server for maximum performance.
Existing R/LIST reports will require only that you adjust the table and column names to the new underscore convention. However, this assumes that you wish to keep all of your symbolics. The same rules apply here as in the window. You may want to experiment by generating the R/LIST code from the old system and replacing the R/BASIC read, write, and dictionary references with a Transact-SQL query utilizing a view on the SQL Server. Remember that doing table joins on the back-end is substantially faster than symbolic joins on the front-end.
R/BASIC Code
The conversion of programs written in R/BASIC is a topic too diverse to cover specifically. Generally, look for logic that can be implemented as a stored procedure on the back-end. For example, a routine that updates the status of detail line items can be converted to a stored procedure to do the same job. Change the file input/output statements to do a direct call to MSSQL_C or EXEC_TRANS_SQL.
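As a hedged illustration of the detail-line example (the procedure, table, and column names are hypothetical):

```
/* Hypothetical stored procedure replacing an R/BASIC update loop */
CREATE PROCEDURE UPD_LINE_STATUS
   @ORDER_ID INT,
   @NEW_STATUS CHAR(1)
AS
UPDATE ORDER_LINES
SET    STATUS   = @NEW_STATUS
WHERE  ORDER_ID = @ORDER_ID
```

From R/BASIC, the whole update then collapses to a single EXEC statement handed to MSSQL_C or EXEC_TRANS_SQL — one network round trip instead of a read-modify-write loop over every detail row.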