Delphi in Depth: FireDAC

by

Cary Jensen
Chapter Titles
Chapter Titles .................................................................................................................. v
Table of Contents ......................................................................................................... vii
About the Author ........................................................................................................ xvii
About the Technical Reviewers .................................................................................. xix
Acknowledgements ..................................................................................................... xxi
Introduction ..................................................................................................................... 1
Chapter 1 Overview of FireDAC ..................................................................................... 5
Chapter 2 Connecting to Data ..................................................................................... 15
Chapter 3 Configuring FireDAC ................................................................................... 47
Chapter 4 Basic Data Access ...................................................................................... 81
Chapter 5 More Data Access ..................................................................................... 109
Chapter 6 Navigating and Editing Data .................................................................... 145
Chapter 7 Creating Indexes ....................................................................................... 165
Chapter 8 Searching Data .......................................................................................... 197
Chapter 9 Filtering Data ............................................................................................. 217
Chapter 10 Creating and Using Virtual Fields .......................................................... 259
Chapter 11 Persisting Data ........................................................................................ 297
Chapter 12 Understanding FDMemTables ................................................................ 329
Chapter 13 More FDMemTables: Cloned Cursors and Nested DataSets ............... 369
Chapter 14 The SQL Command Preprocessor ......................................................... 397
Chapter 15 Array DML ................................................................................................ 425
Chapter 16 Using Cached Updates ........................................................................... 439
Chapter 17 Understanding Local SQL ...................................................................... 487
Appendix A Code Download, Database Preparation, and Errata ............................ 507
Index ............................................................................................................................ 519
Table of Contents
Dedication ......................................................................................................... 3
Chapter Titles .................................................................................................................. v
Table of Contents ......................................................................................................... vii
About the Author ........................................................................................................ xvii
Cary Jensen ................................................................................................... xvii
About the Technical Reviewers .................................................................................. xix
Dmitry Arefiev............................................................................................... xix
Holger Flick ................................................................................................... xix
Jens Fudge ...................................................................................................... xx
Bruce McGee .................................................................................................. xx
Acknowledgements ..................................................................................................... xxi
Introduction ..................................................................................................................... 1
Who Is This Book For ...................................................................................... 2
Conventions ...................................................................................................... 2
Chapter 1 Overview of FireDAC ..................................................................................... 5
FireDAC Features ............................................................................................. 6
Cross-Platform Support .......................................................................................... 7
Exceptional Support for Databases ......................................................................... 7
Flexible Queries Using the SQL Command Preprocessor ...................................... 8
Blazing Performance with Array DML ................................................................... 8
Support for a Variety of Query Execution Modes ................................................... 9
Powerful Monitoring Capabilities ........................................................................... 9
Cached Updates .................................................................................................... 10
Result Set Persistence ............................................................................................ 10
Data Type Mapping ............................................................................................... 11
Local SQL .............................................................................................................. 11
Additional Features ............................................................................................... 12
Connection Recovery ................................................................................................................... 12
Advanced Transaction Support..................................................................................................... 12
Built-In Dialog Support ................................................................................................................ 12
Support for Database-Specific Services........................................................................................ 12
Customizable Data Access ........................................................................................................... 13
Batch Move Support ..................................................................................................................... 13
General Updating.......................................................................................................................... 72
Locking......................................................................................................................................... 72
Refreshing .................................................................................................................................... 73
Automatic Incrementing ............................................................................................................... 73
Posting Changes ........................................................................................................................... 74
Transaction Options .............................................................................................. 75
Isolation Level .............................................................................................................................. 76
Update Ability .............................................................................................................................. 76
Automatic Committing ................................................................................................................. 77
DBMS-Specific Parameters .......................................................................................................... 77
Disconnection Action ................................................................................................................... 78
Nesting ......................................................................................................................................... 78
Understanding UpdateOptions.UpdateMode .................................................. 78
Chapter 4 Basic Data Access ...................................................................................... 81
The User Interface and Data Binding ............................................................. 85
Navigation and VCL Data Links ........................................................................... 86
The DBNavigator ......................................................................................................................... 86
Multi-Record VCL Controls and Navigation: DBGrid and DBCtrlGrid ...................................... 89
Navigation and LiveBindings ................................................................................ 91
The BindNavigator ................................................................................................ 92
Position-Related LiveBindings .............................................................................. 94
Understanding FDTable.................................................................................. 97
Configuring an FDTable ....................................................................................... 97
Executing Datasets at Design Time ....................................................................... 99
Executing DataSets at Runtime ........................................................................... 100
When Should You Connect? ................................................................................ 100
Live Data Window ............................................................................................... 102
Executing Queries and Stored Procedures .................................................... 103
Executing Queries ............................................................................................... 103
Executing Stored Procedures ........................................................................ 105
Chapter 5 More Data Access ..................................................................................... 109
Parameterized Queries and Stored Procedures ............................................. 109
The Advantages of Parameters ............................................................................ 110
Greater Flexibility ...................................................................................................................... 110
Improved Performance ............................................................................................................... 110
Prevention of SQL Injection ....................................................................................................... 111
Defining Parameters at Design Time .................................................................. 112
Parameterized FDQueries and the Query Editor................................................ 115
About the Author

Cary Jensen

Twitter: http://twitter.com/caryjensen/
Blog — Let's Get Technical: http://caryjensen.blogspot.com/
Company web site: http://www.JensenDataSystems.com/
LinkedIn: https://www.linkedin.com/in/cary-jensen-31a921
Email: info@jensendatasystems.com
About the Technical Reviewers

Dmitry Arefiev
Dmitry Arefiev is the creator of AnyDAC, the product that eventually became
FireDAC. He is currently the FireDAC architect for Embarcadero Technologies,
the makers of RAD Studio and Delphi.
Email: darefiev@gmail.com
Holger Flick
Dr. Holger Flick is a well-known member of the Delphi Community, and has
worked with Delphi and Borland Pascal before Delphi. While achieving a
Degree in Computer Science and a Doctorate of Engineering, he was part of
several developer teams at Borland and later CodeGear. This gave him the
means to gain first-hand knowledge of the tools and frameworks. He wrote
several articles and spoke at many Delphi Road Shows, seminars, and
conferences. When developing software with Delphi, his focus is on database-
driven applications for both desktop and mobile platforms. Since 2016, Holger
heads his new brand “Flix Engineering” and is available for training,
development, and consulting services.
URL: https://flixengineering.com/blog
Twitter: https://twitter.com/hflickster
LinkedIn: https://de.linkedin.com/in/hflick
Email: info@flixengineering.com
Jens Fudge
Jens Fudge has been working with Delphi since 1995, when it first came out. He
has built mainly database systems for a lot of various customers in different
areas like railroad companies, airports, cement factories and even a government
application. Jens is an Embarcadero Delphi MVP, and works as a trainer and
consultant for many different companies, and is also a frequent speaker at
international and national conferences. Apart from being a Delphi developer and
consultant, Jens also brews beer, wine and mead, and shoots archery. The latter
has inspired Jens to the name of his company, which is Archersoft. Jens won the
Gold medal in archery at the Paralympic Games in Barcelona, Spain in 1992.
Email: Jens.fudge@archersoft.dk
Bruce McGee
Blog: http://www.glooscap.com/
Acknowledgements
Software development is an ironically solitary task that typically involves a lot
of people. In addition to the many hours that we spend in front of our
computers, using our skills to create value out of thin air, we rely heavily on
the talents of others, from the designers who write the specifications, to the
quality assurance folks who ensure that our work is up to the task, from the
managers who make sure that we receive the necessary resources to do our jobs,
to our clients, whose needs make our efforts so worthwhile.
In that respect, writing a book is very similar. In addition to those countless
hours of research and writing, each author depends on the support and input of
many people in getting a book to press. And this project is no different.
To begin with, I want to thank my wife and business partner, Loy Anderson,
who has become a tireless editor and organizer. Without her efforts this book
would never see the light of day. Not only does she endlessly read copy for
readability, spelling, and grammar, but she also acts as publisher, working with
the print and ebook publishers to ensure that the final product is one of which
we can all be proud.
Next, I want to thank this book’s technical reviewers. To begin with, I want to
thank Dmitry Arefiev, the creator of FireDAC. I am so grateful that he was able
to review many of the chapters of this book, especially the more complex
chapters. His comments and corrections ensured the accuracy of so many
technical details. And he did this while working toward the release of RAD Studio
10.2 Tokyo. I also want to note that his commitments to Embarcadero
Technologies prevented him from reviewing every chapter prior to this book
heading to the printers, so you have me to blame for those remaining
inaccuracies. I will strive to resolve those using the errata, whose URL appears
in Appendix A.
Next I want to thank Dr. Holger Flick. He contacted me via LinkedIn last fall
with questions regarding my last book: Delphi in Depth: ClientDataSets. His
questions led me to choose FireDAC as the topic of this book, and I began
working on it as soon as I completed the Delphi Developer Days 2016 course
book (which I wrote with Nick Hodges, Director of Engineering at Embarcadero
Technologies). Holger even offered to be a technical reviewer, an offer that I
could not refuse. In addition to reviewing the book, he is planning on translating
this book into German, something that I find very exciting.
I also want to thank Jens Fudge of ArcherSoft. Like Holger, Jens offered many
suggestions and observations that helped improve the overall quality of the
book. I am deeply indebted to both of them. Finally, I want to thank Bruce
McGee of Glooscap Software. His comments and suggestions were also quite
helpful, and he showed a remarkable talent for catching wording errors that
managed to escape the rest of us.
I also want to acknowledge the help, encouragement, and advice of my many
friends and colleagues. To begin with, I want to thank Jim McKeeth, Chief
Developer Advocate and Engineer at Embarcadero Technologies. In addition to
his encouragement, Jim always ensured that I had the latest information on RAD
Studio.
I also want to thank my Delphi Developer Days co-presenters, including Bob
(Dr.Bob) Swart of Bob Swart Training and Consultancy, Ray Konopka of Raize
Software, and most recently, Nick Hodges of Embarcadero Technologies. It has
been a great pleasure to write and present with these pillars of the Delphi
community.
I also want to give a great big thanks to my long-time friend, and former Delphi
Developer Days co-presenter, Marco Cantù, the RAD Studio Product Manager
at Embarcadero Technologies. I’ve known Marco for more than 20 years, and
we share both a love for writing and a love for Delphi. I am grateful that we
have such a thoughtful and dedicated person leading Delphi into the future.
I also want to thank the development teams at Embarcadero Technologies, from
Nick Hodges and Dmitry Arefiev on down. Delphi (and C++Builder) are
wonderful products, and I want these people to know that we appreciate their
work.
Finally, I want to thank the many members of the Delphi community. We’re in
this together, and your commitment to this remarkable product makes the
arduous task of writing books like this one worth the effort.
Introduction
I remember when I first heard about Delphi. It was 1993, and I was serving on
the Paradox Advisory Board for the annual Borland International Conference.
After a presentation by Philippe Kahn at the Borland International headquarters
in Scotts Valley, California, in which he introduced us to this next generation
Pascal compiler and IDE (Integrated Development Environment), one of the
other board members turned to me and said that Delphi was the compiler that we
Paradox developers had been waiting for.
What this story reveals is that I entered the Delphi community as a database
developer. In fact, I was a columnist for The Delphi Informant Magazine, from
the very first issue, and my column was named DBNavigator, reflecting my
emphasis on multiuser databases. I continue to share this enthusiasm for
database development to this very day.
As a Delphi database developer, I was keenly aware of the great data-related
breakthroughs introduced in Delphi. Though considered an obsolete technology
now, when it was introduced, the Borland Database Engine was ahead of its
time. Delphi also introduced the data module (Delphi 2), and the ClientDataSet
(Delphi 3 Client/Server).
I also had my share of disappointments, namely dbExpress (Kylix and Delphi
6). While dbExpress was fast, it was more of a C-like, pass-through SQL
framework that violated many of the principles of the TDataSet interface. When
SQL Links for Windows, and subsequently the BDE itself, were deprecated, it
felt like database developers had to turn to third-party solutions in order to
continue working with databases using the Delphi way.
That changed in the spring of 2013, when Embarcadero added FireDAC to RAD
Studio XE3. Embarcadero had acquired AnyDAC, and re-branded it as FireDAC.
FireDAC was the perfect replacement for the BDE, supporting both the
TDataSet interface as well as a large number of database servers. Finally, Delphi
database developers had a flexible, powerful, and easy to use framework for
working with data.
Over the intervening years, FireDAC has matured and improved, eventually
becoming the unequivocal Delphi data access mechanism of choice. I used it in
my preceding book, Delphi in Depth: ClientDataSets Second Edition, and I have
used it exclusively for database examples for Delphi Developer Days material
over the last three years. And now, I have expressed my confidence in FireDAC
by writing this book.
Conventions
Most of the examples in this book make use of FDQuery components, which are
used to execute SQL (Structured Query Language) statements. In this book, I
am pronouncing SQL as “es”-“que”-“el,” and not “sequel.” What this means is
that I will say “an SQL statement,” instead of “a SQL statement.”
Another convention that I use is to drop the T in most references to a class. For
example, while I will occasionally speak strictly about a class, say TFDQuery, I
will most often refer to instances of this class as FDQueries, and then more
conversationally as “queries.” My main goal is readability. To me, the constant
use of the T in a class name makes the text harder to read.
Another convention relates to the sample projects that accompany this book. In
almost every case, when I show a code segment, it is code that can be found in a
sample project from the code download. The first time I refer to a given project
in a chapter, I include a note indicating the name of the project as it appears in
the code download. I do not repeat this note in subsequent references to that
project in the same chapter.
Chapter 1
Overview of FireDAC
FireDAC is a comprehensive collection of components that implement Delphi's
traditional TDataSet interface. In this respect, it is comparable to the Borland
Database Engine (BDE), dbExpress, InterBase Express, and dbGo, RAD
Studio's TDataSet components for ActiveX Data Objects (ADO).
Embarcadero added FireDAC to Delphi in the spring of 2013 after acquiring
AnyDAC from DA-Soft Technologies, and it first appeared in Delphi XE3. At
that time, the components still used the AD prefix. By Delphi XE5, the
component prefix was formally changed to FD, reflecting the FireDAC moniker.
When Delphi first shipped it had one data access framework, the BDE, although
at the time it was referred to as ODAPI (Open Database Application
Programming Interface). ODAPI was initially released with Quattro Pro for
Windows, and later Paradox for Windows. It was later renamed IDAPI
(Independent Database API), an acronym subsequently reinterpreted as Integrated
Database API. Even these technologies were an outgrowth of the Paradox Engine, a
set of DOS overlays which, when released back in September of 1991, represented
one of the first times that the table and record locking mechanisms of a database
(Paradox, of course) were made available to developers using other tools, in this
case Turbo Pascal and Turbo C++.
While the BDE was a breakthrough technology in its early years, providing a
fast, independent data access layer, it was cumbersome to install, used a lot of
network bandwidth, and had limited support for remote database servers, this
being provided by a set of DLLs (dynamic link libraries) referred to as Borland
SQL Links for Windows. Over time, it became increasingly obsolete.
The need for a new data access mechanism for Delphi became even more
apparent during the development of Kylix, a Delphi-based compiler and IDE
(integrated development environment) for Linux. Porting the BDE to Linux was
ruled out, and dbExpress was born. dbExpress is a high-speed client/server data
access framework based largely on pass-through SQL (Structured Query
Language).
The dbExpress framework had one major drawback, however. In most cases
converting a BDE project to dbExpress required a major refactoring of the data
access logic, and dbExpress did not support the old-style file server databases
such as Paradox, dBase, or MS Access until Delphi XE2. As a result, for many
BDE developers, dbExpress was a poor option.
To make matters worse, Borland SQL Links for Windows, the drivers that
supported remote database servers such as Oracle, InterBase, SQL Server, and
the like, were deprecated. This happened shortly after the release of dbExpress.
Deprecation of the BDE itself followed a few years later, leaving Delphi
developers without a good migration path without going outside of the product.
This has changed with the introduction of FireDAC. Conversion from the BDE
to FireDAC is more or less smooth, and Delphi even ships with a tool, named
reFind, that helps with much of the conversion process. It is for this reason that I
use FireDAC in all of my new projects (and in all of the database projects
included in my source code contributions to Delphi Developer Days). In
addition, I think that a good argument can be made for migrating legacy
applications to use FireDAC when a major revision is scheduled. Yes, it is that
good.
In this chapter, I am providing a high-level overview of FireDAC, and here I
will introduce most of FireDAC's more interesting features. Many of these
features are discussed in depth in later chapters of this book.
FireDAC Features
There are many reasons to use FireDAC, and a lot of these have to do with the
feature set that FireDAC brings to the table. While FireDAC supports
capabilities similar to other TDataSet collections available in Delphi, there are a
number of features that are unique to FireDAC, and which make FireDAC
extremely attractive.
To begin with, FireDAC does an exceptional job supporting the TDataSet
interface, going so far as to introduce some of the advanced features found in
only some of the TDataSet implementations (ClientDataSet, for example). I’ve
already made this point, so I won’t belabor it further. But there is much more to
FireDAC than simple TDataSet conformity. FireDAC supports an exceptional
collection of features and capabilities that make it the clear choice for
implementing data access in Delphi applications.
For the remainder of this chapter, I am going to highlight the many features of
FireDAC that really make it shine. For those more involved features, I am going
to keep these descriptions at a high level, saving the specific details for later
chapters where the feature is examined in more detail.
Cross-Platform Support
FireDAC is supported on all of Delphi's platforms. You can use FireDAC in
Windows applications, Mac OS applications, iOS, Android, and Linux
applications, and where RAD Studio supports 64-bit compilation, both 32-bit
and 64-bit versions.
The extent to which a given platform is supported depends in part on the
availability of an appropriate client library. For example, all databases are
supported on the Windows platform, in part because every database vendor
creates a client API for Windows. There are few client APIs, however, for the
mobile platforms. InterBase and SQLite, for example, are supported on both iOS
and Android, but many databases do not publish a client library for the iOS or
Android ARM platforms.
The good news is that even though some platforms have limited client libraries,
you can always use DataSnap or RAD Server (previously called Embarcadero
Mobility Services, or EMS) to serve data to your mobile devices.
Exceptional Support for Databases
FireDAC provides drivers for almost every major relational database, both
commercial and open source. On the commercial side, you find native support
for Oracle, IBM DB2, MS SQL Server, InterBase, SAP SQL Anywhere, SAP
Advantage Database Server, and Teradata. Open source databases natively
supported by FireDAC include Firebird, MongoDB, MySQL, PostgreSQL, and
SQLite.
If you don't find your particular database in that list, there's no need to worry.
FireDAC supports two bridging drivers, one for ODBC (Open DataBase
Connectivity) and another for dbExpress. Using the FireDAC ODBC driver
permits you to work with virtually any relational database in existence, as
ODBC is about as universal as it gets. Even COBOL data files are supported by
ODBC.
What's particularly interesting about FireDAC's native drivers is that many of
these provide you with access to features specific to the associated database. For
example, FireDAC supports the return of multiple result sets from an SQL
command execution against those databases that support that feature, such as
InterBase, MS SQL Server, Oracle, and PostgreSQL. Likewise, FireDAC's
native drivers support database alerts (notifications or events).
It should be noted that full support for the databases listed above requires that
you have an Enterprise-level Delphi license or higher, or that you have
purchased the FireDAC Client/Server Add-On Pack for Professional-level RAD
Studio products. RAD Studio Professional (as well as Delphi Professional and
C++Builder Professional) is licensed only for local file server database access,
such as MS Access, Paradox, SQLite, and Advantage Local Server. Fortunately,
Delphi Professional (and C++Builder Professional) come with both the
InterBase developer edition and the FireDAC InterBase driver, so you can use
those versions with this book without having to first buy the FireDAC
Client/Server Add-On Pack.
Flexible Queries Using the SQL Command Preprocessor
The SQL command preprocessor, formerly called Dynamic SQL, permits you to
write flexible SQL that includes macros, conditional substitution, identifier
substitution, and ODBC-like escape functions. The SQL command preprocessor
can perform a variety of manipulations on your SQL before it sends that SQL to
the underlying database.
One of the more useful applications of the SQL command preprocessor is to
permit you to write one set of queries that can be executed against a variety of
databases, even when those databases support different dialects of SQL. For
example, the {CONVERT(…)} escape function will be expanded into the
TO_CHAR keyword when executing against an Oracle database, but into
CONVERT when connected to Microsoft SQL Server.
Statements that use conditional substitution look similar to the {$IF} conditional
compilation compiler directives in the Delphi language, though the substitution
is not performed at compile time. Instead, this operation occurs at runtime, and
is performed by the SQL command pre-processor, as part of SQL command
preparation and right before the query is submitted to the underlying database.
The SQL command preprocessor has utility beyond supporting multiple
databases. For example, the use of macro substitution in your SQL statements
permits you to write queries whose tables, fields, or WHERE clause predicates
are not known until runtime. (A predicate is a Boolean expression used in SQL,
such as in WHERE clauses, HAVING clauses, and joins). All you need to do is
ensure that you have bound valid values to those macros before executing the
query, and the command preprocessor will take care of updating the resulting
SQL.
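As a minimal sketch (not from the book's sample projects), the following uses a
macro to supply the table name at runtime; the CUSTOMER table and CUST_NO
column are assumptions:

  // The &table macro is expanded by the preprocessor before the SQL is prepared.
  FDQuery1.SQL.Text := 'SELECT * FROM &table WHERE CUST_NO = :id';
  FDQuery1.MacroByName('table').AsRaw := 'CUSTOMER';
  FDQuery1.ParamByName('id').AsInteger := 1001;
  FDQuery1.Open;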
Blazing Performance with Array DML
Array DML (Data Manipulation Language, a category of SQL operations),
provides a mechanism that supports high-speed data manipulation using
parameterized query and stored procedure execution. This feature is useful when
you have a single parameterized query or stored procedure that needs to be
executed repeatedly, with each execution using different values in the
parameters.
To use Array DML, you create a parameterized SQL statement, along with an
array of parameters, with one row of the array of parameters for each instance of
the query or stored procedure execution. When executing, FireDAC sends both
the parameterized query and the array of parameters to the database engine,
which is then responsible for performing the parameter binding and query
executions one time for each element in the array. The alternative is to execute
your queries one at a time, manually binding the next set of parameters prior to
each subsequent execution.
Array DML can produce unparalleled performance, benefiting from a reduction
in network traffic, database roundtrips, and delegating the parameter binding to
the server. The advantages of Array DML are particularly beneficial for extract,
transform, load (ETL) operations, such as those often associated with data
warehousing.
Performance varies by database, as some databases natively support the
operations performed by Array DML. For the others, FireDAC emulates the
operations performed by Array DML, which might produce only marginal
performance improvements. Databases that benefit most from array DML
include, but may not be limited to, InterBase, Firebird, IBM DB2, MS SQL
Server, Oracle, and SQLite.
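In code, Array DML looks something like the following sketch (not from the
book's sample projects; the ITEMS table and its columns are assumptions):

  var
    i: Integer;
  begin
    // One parameterized INSERT, executed once for every element of the parameter array
    FDQuery1.SQL.Text := 'INSERT INTO ITEMS (ID, NAME) VALUES (:ID, :NAME)';
    FDQuery1.Params.ArraySize := 1000;
    for i := 0 to FDQuery1.Params.ArraySize - 1 do
    begin
      FDQuery1.Params[0].AsIntegers[i] := i + 1;
      FDQuery1.Params[1].AsStrings[i] := 'Item ' + IntToStr(i + 1);
    end;
    // A single Execute call sends the statement and the entire parameter array
    FDQuery1.Execute(FDQuery1.Params.ArraySize);
  end;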
Support for a Variety of Query Execution Modes
FireDAC provides support for a number of different query execution modes,
both blocking and non-blocking. FireDAC can execute a single non-blocking
query asynchronously, and provides a mechanism for canceling time-consuming
blocking queries.
If you need to execute more than one FireDAC query asynchronously, you can
execute those queries in worker threads (using Delphi’s TThread class),
providing each thread with its own FDConnection and query (or stored
procedure) object. When used with a FireDAC-maintained pool of connections,
connection overhead can be minimized.
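As a simple sketch (not the book's code), an FDQuery whose SQL property has
already been set can be switched to asynchronous execution through its resource
options:

  // amAsync is declared in FireDAC.Stan.Option
  FDQuery1.ResourceOptions.CmdExecMode := amAsync;
  FDQuery1.Open; // returns immediately; react in the dataset's AfterOpen event
  // ...
  FDQuery1.AbortJob(True); // cancels the command if it is still running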
Powerful Monitoring Capabilities
FireDAC includes components and utilities that make it easy to trace FireDAC's
interaction with the underlying databases. Using the FDMoniFlatFileClientLink,
you can write trace information to a file, or you can implement your own
handling of the trace output.

Result Set Persistence

FireDAC datasets can save their contents, including any pending changes, to a
file or stream, and load them again later, preserving information about changes
to your data over multiple sessions and an extended period of time.
What's interesting about LoadFromFile is that it allows you to work with data
offline. For example, after loading data from a database into an FDQuery, you
can save the query results to a local file, after which the data can be loaded
again without the presence of the database connection. Here again, FireDAC
goes beyond the traditional usage of dataset persistence. Combining offline
mode, dataset persistence, and cached updates mode, your application can work
disconnected from a database and persist the data between application run
sessions. Later, when the database is once again available, any changes to the
data can be applied to the underlying tables.
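A minimal sketch of this offline workflow appears below (the file name is
illustrative, and the binary format requires the FireDAC.Stan.StorageBin unit,
or a TFDStanStorageBinLink component, in your project):

  // Persist the query's data, including any pending cached updates, to a file
  FDQuery1.SaveToFile('C:\Temp\customers.fds', sfBinary);
  // ... later, possibly in another session and with no database connection:
  FDMemTable1.LoadFromFile('C:\Temp\customers.fds', sfBinary);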
Data Type Mapping
Just as FireDAC gives you a lot of control over connection options, it also
permits you to customize the mapping of data types. You control the mapping of
data types using the FormatOptions property of your FireDAC object
(FDConnection, FDQuery, or another FireDAC entity). If you set
FormatOptions.OwnMapRules to True, you can define one or more FDMapRule
instances in the FormatOptions.MapRules property. Each FDMapRule describes
to FireDAC a source data type, database data type name and/or a field name,
and the data type to which it should convert the data in the destination.
Data type mapping is especially valuable in applications that support two or
more different database types, permitting you to map incompatibilities to a
common set of data types. It can also be invaluable when you are migrating
from one database (for example, a file server database) to another (a remote
database, for instance), or from one set of data access components (for example,
BDE) to FireDAC. Data type mapping in this case permits you to maintain a
consistent user interface despite changes in the underlying data types.
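As a quick sketch (the data types chosen are just examples), the following maps
every fixed-point BCD column to Currency for all datasets using a connection:

  // dtFmtBCD and dtCurrency are declared in FireDAC.Stan.Intf
  FDConnection1.FormatOptions.OwnMapRules := True;
  with FDConnection1.FormatOptions.MapRules.Add do
  begin
    SourceDataType := dtFmtBCD;
    TargetDataType := dtCurrency;
  end;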
Local SQL
Local SQL permits you to execute SQL SELECT, INSERT, and UPDATE
statements against any dataset. (Other DML statements may be allowed, but the
ones I listed are the obvious operations that most developers will be interested
in.) For example, you can perform a query against an FDTable to gather simple
aggregate statistics like SUM and AVG from the data it contains. Similarly, you
can query an FDQuery and perform a left outer join to an FDStoredProc
component (in which case the stored procedure must return a result set).
Similarly, you can load a text file into an FDMemTable and execute an SQL
SELECT query against it.
Importantly, this ability to query datasets is not limited to FireDAC datasets. For
instance, you can create a query that performs a join between an FDQuery, an
SQLDataSet, and a ClientDataSet.
FireDAC performs this SQL sleight-of-hand by using the TDataSet API and
SQL engine from the SQLite open source project. It's a very clever technique,
and one that enables a whole range of interesting data-related solutions that
would otherwise be difficult or impossible to implement. Your joins don't have
to be as complicated as I've described, but the benefits are obvious.
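The following sketch (not from the book's sample projects) gives a sense of the
setup. It assumes FDConnection1 uses the SQLite driver, and that ClientDataSet1
and FDMemTable1 already contain data; the dataset names, field names, and query
are illustrative:

  // Register the datasets with Local SQL under the names used in the SQL below
  FDLocalSQL1.Connection := FDConnection1;
  with FDLocalSQL1.DataSets.Add do
  begin
    DataSet := ClientDataSet1;
    Name := 'Customers';
  end;
  with FDLocalSQL1.DataSets.Add do
  begin
    DataSet := FDMemTable1;
    Name := 'Orders';
  end;
  FDLocalSQL1.Active := True;

  // Query the registered datasets as if they were tables
  FDQuery1.Connection := FDConnection1;
  FDQuery1.Open('SELECT c.Company, COUNT(*) AS OrderCount ' +
    'FROM Customers c LEFT JOIN Orders o ON o.CustNo = c.CustNo ' +
    'GROUP BY c.Company');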
Additional Features
I've outlined many of the major features of FireDAC, but there's more to
FireDAC than what I've described so far. Here are some additional features of
FireDAC that might interest you.
CONNECTION RECOVERY
FireDAC provides transparent connection recovery. When an FDConnection's
ResourceOptions.AutoReconnect property is set to True, FireDAC will attempt
to reconnect to a database when the connection is dropped. This feature is
especially valuable in environments where the network connection is unstable or
unreliable.
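Enabling it is a one-line change (a sketch; the OnRecover event mentioned in the
comment is optional):

  // Let FireDAC transparently re-establish a dropped connection.
  // The FDConnection's OnRecover event can be used to log or veto recovery attempts.
  FDConnection1.ResourceOptions.AutoReconnect := True;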
ADVANCED TRANSACTION SUPPORT
FireDAC supports a range of transaction options, including simultaneous and
nested transactions. True simultaneous transactions are supported on InterBase
and Firebird. Nested transactions are emulated on those databases that support
save points.
BUILT-IN DIALOG SUPPORT
One of the lesser, though certainly welcome, features of FireDAC is its
predefined dialog boxes. FireDAC includes a handful of useful, specific purpose
dialog boxes that can be added to your application simply by dropping a
component onto your form or data module. For example, the
FDGUIxLoginDialog can be used to automatically display a dialog box to
capture a user's database username and password. Similarly, the
FDGUIxErrorDialog can be used to display exceptions raised by FireDAC.
SUPPORT FOR DATABASE-SPECIFIC SERVICES
FireDAC makes it easy to use some of the supported database services, such as
backup, restore, validation, SQLite custom functions, and more. These services
are accessed using the components from the FireDAC Services tab of the Tool
Palette. Here you will find components that expose the underlying services for
many of FireDAC's supported databases.
Chapter 2
Connecting to Data
You connect to a database using a connection component (FDConnection), and
then typically wire one or more FireDAC datasets to that connection. At this
high level, these steps are no different than from any other data access
framework based on the TDataSet interface. At a lower level, however, the
specific steps you take are uniquely FireDAC.
FireDAC supports three distinct options for connecting to your database. These
are:
Temporary connections
Persistent connections
Private connections
Which type of connection you use affects how the connection can be used. For
example, a persistent connection, which is a named connection, relies on an
externally defined connection definition file, and therefore, can be shared by
two or more applications. In addition, a persistent connection can be pooled,
given that the connections are associated with a common FDManager, which is
to say that connections can be pooled within a single application.
Private connections are also named connections, and can be pooled. Unlike
persistent connections, however, they are defined at runtime, exist only for the
duration of the application session, and cannot be shared between applications.
On the other hand, they do not rely on an external connection definition file.
Finally, temporary connections, which are not named, cannot be pooled and
cannot be shared. They are, however, very easy to define at either runtime or
design time, and do not rely on an external connection definition file.
In addition, they are easy to deploy given that they do not require an external file
definition. The following sections demonstrate how to create unnamed
connections, referred to as temporary connections.
Creating Temporary Connections
You create a temporary connection using the FDConnection component. The
following steps demonstrate how to create a temporary connection to InterBase
(a database that was probably installed for you when you installed Delphi. If
not, you can download the Developer Edition of InterBase for free).
With InterBase installed and running as a service locally, use the following steps
to connect to the sample employee.gdb database that is part of Delphi's sample
databases:
1. Select File | New | VCL Forms Application from Delphi's main menu to
create a new project.
2. Add to this project a data module by selecting File | New | Other, and
then selecting the Data Module template from the Delphi Files page of
the Object Repository.
3. Add an FDConnection component from the FireDAC page of the Tool
Palette to the data module.
4. Double-click the FDConnection component (or right-click it and select
Connection Editor...) to view the FireDAC Connection Editor. The FireDAC
Connection Editor shown in Figure 2-1 already has its Driver ID set to IB,
which is why the connection parameters for the InterBase driver are shown.
5. Set Driver ID to IB. Once you set the value of Driver ID, the properties
associated with the FireDAC InterBase driver are displayed in the
configuration pane, as seen in the preceding figure. Set Database to the
employee.gdb database, which in the most recent versions of Delphi
(Delphi 10.2 Tokyo at the time of this writing) is located in one of the
following directories:

C:\ProgramData\Embarcadero\InterBase\gds_db\examples\database\

or

C:\Users\Public\Documents\Embarcadero\Studio\19.0\Samples\Data
Note: If you are using the developer edition of InterBase, the service might not be
running. In that case, you may have to start it using the Services applet from
Administrative Tools.
This configuration was particularly easy since the InterBase server is running on
the local machine. Depending on the database that you are connecting to, where
it is located, and details about your configuration, you will likely enter more
parameters than we did here.
In addition to connection parameters, the FireDAC Connection Editor permits
you to configure many of the FDConnection properties. These are found on the
Options tab of the FireDAC Connection Editor, shown in the Figure 2-2.
Finally, the SQL Script tab permits you to enter SQL statements that you want
to execute ad hoc against this connection. This can be very useful if you want to
perform a quick operation such as viewing some data or creating a new table.
When you are done setting your connection parameters and options, close the
Connection Editor by clicking the OK button.
We are now ready to finish this simple example.
Figure 2-4: The SQL Command tab of the FireDAC Query Editor
9. In the SQL Command pane (the upper part) enter the following query:
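SELECT * FROM CUSTOMER

(The CUSTOMER table is one of employee.gdb's sample tables; any simple SELECT
works here.)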
You can now test the query, if you like, by clicking the Execute button.
When you are done, click OK to return to the form. Clicking OK saves
the query you entered into the FireDAC query’s SQL property. If you
instead click Cancel, the query text is lost.
10. Now, set the FDQuery's Active property to True. At this point you will
be once again challenged for the password associated with sysdba. Enter
masterkey when prompted and click OK. We had to re-enter the
password because we left the LoginPrompt property of the
FDConnection set to True so that the user will be challenged for a username
and password when they first run the program. In order to supply that
dialog box, add an FDGUIxLoginDialog component to the data module
from the FireDAC UI tab of the Tool Palette. Now set the
FDConnection’s LoginDialog property to FDGUIxLoginDialog1.
11. Return to your main form and use your data module unit by selecting
File | Use Unit, and select your data module's unit name from the
displayed list.
12. Now add a DBGrid, a DBNavigator, and a DataSource component to
your main form. Set the DataSource property of both the DBGrid and
DBNavigator to DataSource1, and the DataSet property of the
DataSource to DataModule1.FDQuery1 (or whatever name is
appropriate for your data module and data source). Your form should
now look something like that shown in Figure 2-5.
At this point everything looks great, and if we were using one of the other data
access frameworks (BDE, dbExpress, etc.) we could hit Run (F9) and the
application would just run. But there is a little quirk with FireDAC, and that is
that it requires some resources that are in units that do not necessarily appear in
your uses clause by default. Depending on the version of Delphi you are
running, if you hit Run (F9) now you might get a runtime error like the one
shown in Figure 2-6.
Beginning with Delphi XE8, FireDAC ensures that the proper units are added
automatically. Nonetheless, if you get this
error, there are two components that you can add to your project that will add
the necessary units to your uses clause. The first component is the physical
driver link component associated with the database driver you are using. For
InterBase, this component has the name FDPhysIBDriverLink, and there are
similarly named components for the other supported FireDAC drivers. The
second component is the FDGUIxWaitCursor component.
You will find the physical driver link components on the FireDAC Links page
of the Tool Palette, and the FDGUIxWaitCursor component on the FireDAC UI
page of the Tool Palette.
13. If necessary, add the two required components (FDPhysIBDriverLink
and FDGUIxWaitCursor).
Here is the interesting part. FireDAC doesn't really need these components, and
you do not need to set any of their properties either. What FireDAC really wants
is the units associated with these components in your uses clause so that they get
initialized. The FDPhysIBDriverLink component causes the insertion of the
FireDAC.Stan.Intf, FireDAC.Phys, FireDAC.Phys.IBBase, and
FireDAC.Phys.IB units into your uses clause (at least in XE5 and later; a
distinctly different set of units is added in versions prior to XE5). Similarly, the
FDGUIxWaitCursor component results in the addition of the FireDAC.UI.Intf,
FireDAC.VCLUI.Wait, and FireDAC.Comp.UI units (in a VCL application; a
FireMonkey application uses FireDAC.FMXUI.Wait instead).
This data can now be viewed, navigated, and edited in a manner similar to that
supported by the BDE's TTable component, even though this data is associated
with an SQL query. In most of Delphi's data access mechanisms, queries are not
directly editable, at least not by default. But in FireDAC, query results are
editable, for the most part.
Defining a Temporary Connection Using FDConnection.Params
By default, the configuration you define in the FireDAC Connection Editor is
stored in the Params property of the TFDConnection. You can easily see this in
the project you just created by using the following steps:
1. With your data module displayed in the designer, select the
FDConnection component.
2. Select the Params property in the Object Inspector and click the ellipsis
that appears in order to open the String List Editor, shown in Figure 2-8.
In this case, the String List Editor lists only four parameters: Database, User
Name, Server, and DriverID. Remember, however, that this was a very simple
configuration. We are connecting to a database server running on the local
machine, and we did not leverage any additional capabilities of the Connection
Editor, such as pooling, OS authentication, role name, character set, or any of a
number of available configurable parameters. In most cases, a successful
connection requires more parameters than did this application.
Nonetheless, the bottom line is that those parameters that you set using the
Definition tab of the Connection Editor are written to the Params TStrings
property, and those can be easily viewed by viewing the property editor of the
Params property. Similarly, those properties that you set using the Options tab
of the FireDAC Connection Editor are written to the corresponding instance
properties of the FDConnection.
If you are familiar with the connection parameters of your database driver, you
can enter these values into the Params property editor manually at design time to
achieve the same effect as that accomplished using the Connection Editor.
Defining a Temporary Connection at Runtime
If you want to define your database connection at runtime, you can also use the
Params property. In fact, you can simply copy the same strings that you find in
the Params property at design time following a successful connection to a
database, and use these values to execute the corresponding assignments at
runtime.
For example, consider the values displayed in the String List Editor in the
preceding figure. The following OnCreate event handler uses these values to
configure the FDConnection at runtime:
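  // A sketch of such a handler (not the downloadable project's code); the
  // parameter values mirror the simple InterBase connection defined earlier.
  procedure TDataModule1.DataModuleCreate(Sender: TObject);
  begin
    FDConnection1.Params.Clear;
    FDConnection1.Params.Add('DriverID=IB');
    FDConnection1.Params.Add('Server=127.0.0.1');
    FDConnection1.Params.Add('Database=C:\Users\Public\Documents\Embarcadero\' +
      'Studio\19.0\Samples\Data\EMPLOYEE.GDB');
    FDConnection1.Params.Add('User_Name=sysdba');
    // FDConnection1.Connected can now be set to True when the data is needed
  end;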
You can also set the Params property using the FDConnection's
ConnectionString property. In that case, you assign a semicolon-separated list of
the name-value pairs that define your connection. For example, the following
code achieves the same results as did the preceding code sample:
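  // A sketch: the same connection expressed as a single semicolon-separated string
  FDConnection1.ConnectionString :=
    'DriverID=IB;Server=127.0.0.1;' +
    'Database=C:\Users\Public\Documents\Embarcadero\' +
    'Studio\19.0\Samples\Data\EMPLOYEE.GDB;User_Name=sysdba';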
Note: When working with a database from a worker thread, it is important that
the thread connects to the database using an FDConnection distinct from any
other thread. From the perspective of the database, each connection is a
separate user, thereby allowing the multi-user capabilities of the database to
resolve competition for data access by individual threads.
C:\Users\Public\Documents\Embarcadero\Studio\FireDAC\FDConnectionDefs.ini
If you right-click a node associated with a FireDAC named connection, you can
choose to modify its definition, rename it, delete it, refresh it, or close it (if you
have connected to it). Furthermore, once you connect to it, you can inspect its
metadata, including tables, columns, indexes, views, and stored procedures, as
shown in Figure 2-9.
To create a new named connection using the Database Explorer, right-click the
node associated with the FireDAC database driver you want to use, and select
Add New Connection, as shown in Figure 2-10.
Next, you'll be asked to provide a name for this named connection, as shown
here.
Once you have provided a name, the FireDAC Connection Editor is displayed, shown in
Figure 2-11, where you define the parameters of this named connection. You
complete the connection definition just as you did earlier in this chapter when
you created the temporary connection using the FireDAC Connection Editor.
The primary difference is that once you save this configuration, it will be
associated with the connection definition name, and that name can be used to
create a persistent connection.
Figure 2-11: Named connections are also configured using the FireDAC
Connection Editor
Figure 2-12: You can use the FireDAC Explorer to create named
connections
When the root node of the Objects Explorer is selected, the locations of the
connection definition file (the same file that you update when you use the Data
Explorer) and the driver configuration file are shown in the right-hand pane.
These files are simple INI files that hold the connection definition name/value
pairs in sections whose names are associated with a named connection. These
files are located in the following directory:
C:\Users\Public\Documents\Embarcadero\Studio\FireDAC
If you select one of the connection definition names, the saved parameters for
that connection are shown in the right-hand pane. Furthermore, you can expand
that connection definition to examine the underlying database's tables, stored
procedures, and other similar features.
The following steps walk you through the process of creating a named
connection for the dbdemos.mdb MS Access database that gets installed when
you install Delphi:
C:\Users\Public\Documents\Embarcadero\Studio\19.0\Samples\Data
At this point, the FireDAC Explorer should look something like that
shown in Figure 2-13.
4. You can now save this definition and open the connection. To begin
with, right-click the connection named NewMSAccess and select Apply
from the displayed context menu. Next, test opening the connection by
clicking the plus sign (+) that appears to the left of the connection. You
will be asked to supply a user name and password, but this database is
not encrypted, so you can simply click the OK button without entering a
username or password.
By saving the connection, you are creating a named connection, which is saved
to the FDConnectionDefs.ini file. When you open the connection in the
FireDAC Explorer you can use it to view your available tables, views, and
stored procedures. You can also execute SQL statements against the connection.
Since both the FireDAC Explorer and the Data Explorer save connection
definition names in the same file, you can continue to work with your new
connection using either of these tools. (Although you might have to close and
then re-open Delphi before a connection that you have newly created in the
FireDAC Explorer becomes available in the Data Explorer.)
Creating a Persistent Connection
A persistent connection employs a named connection whose definition exists in
an external ini file. This named connection may be created at either design time
or at runtime.
When created at design time, the connection definition file must be in a location
specified by a Windows registry setting, which is found in the following key:
HKEY_CURRENT_USER\Software\Embarcadero\FireDAC\ConnectionDefFile
[FDConnectionDefs.ini]
Encoding=UTF8
[NewMSAccess]
DriverID=MSAcc
Database=C:\Users\Public\Documents\Embarcadero\Studio\19.0\Samples\Data\dbdemos.mdb
Each named connection appears as a section name in the ini file, and the
name/value pairs in this section correspond to the same name/value pairs that
would otherwise appear in the FDConnection.Params property following the
definition of a temporary connection.
If you want to use a connection definition file using a name other than
FDConnectionDefs.ini, or in a location other than the application's current
directory, you simply copy the section or sections of the default ini file and
paste them into your custom ini file. In addition, you either set the
ConnectionDefFileName property of a manually placed FDManager component,
or you set the ConnectionDefFileName property of the automatically created
FDManager at runtime, prior to attempting to connect any of your
FDConnection components.
Assuming that we have copied the NewMSAccess section from the default
connection definition file, and pasted it into a file named conn.ini in the
application's working directory, the following runtime code will configure the
automatically created FDManager prior to connecting an FDConnection:
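  // A sketch (not the downloadable project's code): point the automatically
  // created FDManager at conn.ini, which is assumed to contain the
  // [NewMSAccess] section, then open the connection by its definition name.
  FDManager.ConnectionDefFileName := 'conn.ini';
  FDConnection1.ConnectionDefName := 'NewMSAccess';
  FDConnection1.Connected := True;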
Of course, if you want to use a fully-qualified filename, you can do that as well.
In some cases, you might want to determine the correct path to the connection
definition file at runtime. In those cases, you must have code that defines the
value of the ConnectionDefFileName property of the FDManager prior to
connecting the first FDConnection.
var
  Params: TStrings;
begin
  Params := TStringList.Create;
  try
    // Define a private (runtime-only) connection definition named IB_EMPLOYEE
    Params.Add('Database=' +
      'C:\Users\Public\Documents\Embarcadero\' +
      'Studio\19.0\Samples\Data\EMPLOYEE.GDB');
    Params.Add('User_Name=sysdba');
    Params.Add('Server=127.0.0.1');
    Params.Add('Pooled=True');
    Params.Add('DriverID=IB');
    FDManager.AddConnectionDef('IB_EMPLOYEE', 'IB', Params);
  finally
    Params.Free;
  end;
  // Connect using the private connection definition
  FDConnection1.ConnectionDefName := 'IB_EMPLOYEE';
  FDConnection1.Connected := True;
end;
Code: This code can be found in the FDPrivateConnection project of the code
download.
Note: While FireDAC's drivers are dynamically linked into your compiled code,
the use of an ODBC driver assumes that the ODBC driver has been deployed to
the machines on which you will run your applications. In the case of the
Paradox (and dBase) ODBC drivers, those ship with Windows by default. If the
ODBC driver that you intend to use is not part of the default Windows
installation, you must take steps to ensure that the ODBC driver is properly
installed either before you install your application, or as part of installing your
application.
Also, note that nearly all ODBC drivers assume the installation of the local
client library, where necessary. For example, if you are connecting to SQL
Anywhere using an ODBC driver, you need to have both the SQL Anywhere
ODBC driver as well as the SQL Anywhere client library installed on the
machine through which you intend to connect to the database. On the other
hand, most of the Microsoft ODBC drivers for file server databases, such as
Paradox, do not require a client library, and can therefore be connected directly
to those supported files.
For details about the connection string options supported by the Microsoft
Paradox ODBC driver, see:
http://www.connectionstrings.com/microsoft-paradox-driver-odbc/
But before we start, I want to point out that the application that I am creating
uses the 32-bit Windows ODBC driver for Paradox, which means that my
application will be a 32-bit Windows application. If you need to build a 64-bit
application, you will also need a 64-bit ODBC driver for your database.
Use the following steps to create a connection and execute a query against a
sample Paradox table that ships with Delphi:
1. Place an FDConnection and an FDQuery onto a data module.
2. Right-click the FDConnection and select Connection Editor.
Alternatively, double-click the FDConnection to display the Connection
Editor.
3. Set the Driver ID drop down to ODBC. The FireDAC ODBC driver
parameters appear in the Connection Editor.
4. Use the available dropdown list to set the ODBCDriver parameter to
{Microsoft Paradox Driver (*.db )}. Note: You include the matching
curly braces, and there is exactly one space between the '*.db' and the
')}'. If you omit this one blank space, the driver will not work.
5. Set ODBCAdvanced to the following string. This string assumes that
you are using Delphi 10.2 Tokyo, in which the version is 19.0. If you are
using a different version of Delphi, replace the strings in both the
DefaultDir and Dbq parameters with the corresponding sample directory
path. Note, this string cannot include carriage returns or line feeds,
unlike the following string which is formatted for the pages of this book:
DriverID=538;Fil=Paradox 5.X;
DefaultDir=C:\Users\Public\Documents\Embarcadero\Studio\19.0\
Samples\Data;
Dbq=C:\Users\Public\Documents\Embarcadero\Studio\19.0\Samples\
Data;
CollatingSequence=ASCII;
6. Your Connection Editor should now look something like that shown in
Figure 2-14.
7. Click Test. FireDAC will ask for a username, password, and other
information. The sample database that ships with RAD Studio is not
encrypted, so you can skip entering any information and click OK. A
dialog box should confirm that you are connected. Close this dialog box
and then save the Connection Editor by clicking OK.
8. Select the FDQuery and make sure that its Connection property is set to
the FDConnection you just configured.
9. Set the FDQuery's SQL property to SELECT * FROM Customer.
10. Next, set the FDQuery's Active property to True.
11. Finally, place one FDPhysODBCDriverLink and one
FDGUIxWaitCursor onto your data module.
There, you are now connected to Paradox using FireDAC and the Windows
Paradox ODBC driver. If you now use this data module from a form, and have a
properly configured DBGrid, DBNavigator, and DataSource on this form, and
the DataSource's DataSet property is set to the FDQuery on the data module,
your form should look something like that shown in Figure 2-15.
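If you prefer to configure the connection entirely in code, the following sketch sets
the equivalent parameters at runtime (it assumes an FDConnection named
FDConnection1 and the same Delphi 10.2 Tokyo sample paths used above; the
FDPhysODBCDriverLink and FDGUIxWaitCursor components are still required):
FDConnection1.Params.Clear;
FDConnection1.Params.Add('DriverID=ODBC');
FDConnection1.Params.Add('ODBCDriver={Microsoft Paradox Driver (*.db )}');
FDConnection1.Params.Add('ODBCAdvanced=DriverID=538;Fil=Paradox 5.X;' +
  'DefaultDir=C:\Users\Public\Documents\Embarcadero\Studio\19.0\Samples\Data;' +
  'Dbq=C:\Users\Public\Documents\Embarcadero\Studio\19.0\Samples\Data;' +
  'CollatingSequence=ASCII');
FDConnection1.Connected := True;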
There is another alternative. From the FireDAC Connection Editor, after you
have set Driver ID to ODBC, you can click the Wizard button to display the
ODBC Select Data Source wizard. The features of this wizard assist you in
selecting or creating an ODBC data source. The ODBC Select Data Source
wizard is shown in Figure 2-16.
In the next chapter you will learn how to configure your FireDAC data access
components.
Chapter 3
Configuring FireDAC
FireDAC consists of a collection of components and classes that implement a
loosely-coupled, multi-layer architecture. You will find evidence of this
structure in many places in FireDAC, and one of the more obvious is in the
design of the connection and dataset classes that provide for the connection to,
and interaction with, underlying databases. This basic hierarchy can be seen in
Figure 3-1.
Figure 3-3: The Options tab of the Connection Editor dialog box
Not only does this dialog box organize the various properties of these shared
objects, but it provides further assistance in assigning values to the properties.
For example, in Figure 3-4 you can see that the FDFetchOptions.Mode and
FDFetchOptions.CursorKind properties are set using a combobox, from which
you can select valid settings.
Figure 3-4: The Connection Editor dialog box provides support for setting
options
The shared class properties are rich and complex, and it is outside the scope of
this book to provide a complete description of each of them. Instead, these
classes, how their properties are organized, and the individual properties, are
provided here with brief descriptions, which are intended to provide you with a
general orientation to the available options. The help system, as well as the
RAD Studio Wiki, includes more extensive details concerning almost all of the
properties listed here. As a result, if you are considering overriding the default
value of a particular property, you should consider consulting the help in order
to get a more complete description of that property.
Before I continue, I have a confession to make. I have not worked with every
database that FireDAC supports. In fact, I have worked with only about a third
of them. Furthermore, as I inspect the wealth of properties supported by these
shared class properties I see some that I am not familiar with or that do not
apply to the databases I have used. As a result, in writing this section I have had
to, at times, rely heavily on Delphi's documentation for descriptions of these
classes and their properties.
Fetch Options
As its name suggests, FetchOptions affects how FireDAC datasets retrieve
information from a database. This property affects a number of aspects of data
retrieval, including how many records are returned in a result set and what type of
cursor FireDAC creates on the database. Some of these properties apply only to
particular components, such as an FDSchemaAdapter or an FDTable.
Figure 3-5 shows the expanded FetchOptions property for an FDConnection in
the Object Inspector.
GENERAL FETCHING
These properties affect how records are retrieved from the underlying database.
Property Description
AutoClose When True, the dataset’s cursor is closed after fetching
records. Default is True. Set AutoClose to False when
an SQL command produces several cursors.
ITEMS TO FETCH
This Items property controls what types of fields to include in a given fetch
operation.
Property Description
Items Contains flags that specify whether to include blob fields, nested
fields, and metadata in a fetch operation. The default is [fiBlobs,
fiDetails, fiMeta].
ITEMS TO CACHE
This Cache property controls what items you want FireDAC to cache.
Property Description
Cache Contains flags that specify whether to cache blob fields, nested
fields, and meta data in internal storage. The default is [fiBlobs,
fiDetails, fiMeta].
MASTER-DETAIL
These properties affect how FireDAC handles master-detail records.
Property Description
DetailCascade When using centralized cached updates, controls
whether changes to the master record will be cascaded
to the associated fields on the detail tables. Default is
False.
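To put a few of these fetch options into code form, here is a minimal sketch (it
assumes an FDQuery named FDQuery1; adjust the flags to your own needs):
// Keep the cursor open because the command returns several cursors
FDQuery1.FetchOptions.AutoClose := False;
// Defer fetching and caching of BLOB fields until they are accessed
FDQuery1.FetchOptions.Items := [fiDetails, fiMeta];
FDQuery1.FetchOptions.Cache := [fiDetails, fiMeta];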
Format Options
The TFDFormatOptions class is the base class for all FireDAC FormatOptions
properties. The properties of this class define how the data obtained from your
FireDAC datasets appears to your applications. Figure 3-6 shows the expanded
FormatOptions property for an FDQuery in the Object Inspector.
Note: Data mapping is typically only required when you employ persistent
fields. It is rarely used with dynamic fields.
Property Description
MapRules MapRules is a collection of FDMapRule objects, each of
which provide FireDAC with information about conversions
that it must perform between source data types and
destination data types. By default, this collection is empty.
Items cannot be added to the MapRules collection unless the
OwnMapRules property is True.
Individual mapping rules are instances of the TFDMapRule class. The following
table contains the names and descriptions of the published properties of this
class.
Property Description
NameMask A string that can include both literal characters, as well as
the SQL wildcard characters % and _, to define the column
name(s) to which this mapping will apply. Useful when
applying the same mask to columns that include common
character combinations. The default is an empty string.
PrecMax Defines the upper limit for a range of data type precision.
PrecMin Defines the lower limit for a range of data type precision.
ScaleMax Defines the upper limit for a range of data type scale.
ScaleMin Defines the lower limit for a range of data type scale.
SizeMax Defines the upper limit for a range of data type size.
SizeMin Defines the lower limit for a range of data type size.
MaxStringSize Defines the maximum length for strings. Longer fields are
returned as BLOB (Binary Large OBject) fields. The
default is 32767.
VALUE PREPROCESSING
These properties control how FireDAC retrieves numeric and date/time values.
Property Description
CheckPrecision Compare the precision and scale of values prior to reading
and writing operations when True. The default is False.
DATASET SORTING
Use these properties to control default sorting options.
Property Description
SortLocale Define the locale ID to configure local sorting. The default is
LOCALE_USER_DEFAULT.
SortOptions Use the sort options flags to define the default local sorting.
Available flags include soNoCase, soNullFirst,
soDescNullLast, soDescending, soUnique, soPrimary, and
soNoSymbols. The default is [].
QUOTATION IDENTIFIER
Use this QuoteIdentifiers property to control whether identifiers are enclosed in
quotes in generated SQL statements.
Property Description
QuoteIdentifiers Set to True to enclose identifiers (for example, field names
and table names) in quote characters. The default is False.
Resource Options
ResourceOptions defines how FireDAC allocates and manages resources. The
ResourceOptions property of FDManager and FDConnection is implemented by
a TFDTopResourceOptions instance, a TFDResourceOptions descendant.
FDCommand and FireDAC's datasets implement ResourceOptions as a
TFDBottomResourceOptions instance, which descends from
TFDTopResourceOptions. Figure 3-7 shows the expanded ResourceOptions
property for an FDStoredProc in the Object Inspector.
PERSISTENCE MODE
Use these properties to enable and control dataset persistence.
Property Description
Backup When set to True, FireDAC will create a backup of the
data file before overwriting it during a save operation.
The default is False.
Persistent When set to True, datasets will load and save their data
to a file, rather than the underlying database. The
default is False.
PersistentFileName The name of the file from which the data will be loaded
or to which it will be saved. The default is an empty
string.
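For example, to have a dataset load and save its data to a local file rather than the
database, you might use something like the following sketch (it assumes an
FDMemTable named FDMemTable1 and a hypothetical file name):
FDMemTable1.ResourceOptions.Persistent := True;
FDMemTable1.ResourceOptions.PersistentFileName := 'orders.fds'; // hypothetical file name
FDMemTable1.ResourceOptions.Backup := True; // keep a backup before overwriting the file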
COMMAND EXECUTION
These properties influence query and stored procedure execution.
Property Description
ArrayDMLSize The maximum number of records to update in a single
slice of a batch operation using array DML. The default
is 2,147,483,647.
CONNECTION RESOURCES
Use these properties to configure how FireDAC manages connection resources.
Property Description
AutoConnect Set to True to cause a connection to automatically
connect when a dataset needs a connection. The default is
True.
Update Options
Update options permit you to configure various aspects of FireDAC's ability to
edit data and write it back to an underlying database. The UpdateOptions
property of both FDManager and FDConnection is a TFDUpdateOptions
instance, while the UpdateOptions property of FDCommand and the other
FireDAC datasets is a TFDBottomUpdateOptions instance, a descendant of
TFDUpdateOptions. Figure 3-8 shows the expanded UpdateOptions property for
an FDCommand in the Object Inspector.
GENERAL UPDATING
Use these properties to configure which operations are acceptable for a dataset.
Property Description
EnableDelete Set to True to permit records to be deleted from datasets. The
default is True.
EnableInsert Set to True to allow records to be added to datasets. The
default is True.
LOCKING
Use these properties to configure how FireDAC locks records.
Property Description
LockMode Defines how FireDAC locks records. Possible values are
lmNone, lmPessimistic (if supported), and lmOptimistic. The
default is lmNone.
REFRESHING
Use these properties to configure how FireDAC refreshes data after posting
changes.
Property Description
RefreshMode Defines how FireDAC refreshes records after updates or
inserts. Possible values are rmManual, rmOnDemand, and
rmAll. The default is rmOnDemand.
RefreshDelete Set to True to have FireDAC remove from local storage any
records not found following a refresh. The default is True.
AUTOMATIC INCREMENTING
Use these properties to control how FireDAC works with auto-increment fields
and generators.
Property Description
AutoIncFields A comma-separated list of auto-increment fields. Note
that FireDAC may automatically recognize these fields
when fiMeta is in FetchOptions.Items and, for some
databases, when connection parameters such as
ExtendedMetadata=True are specified. The default is an
empty string.
FetchGeneratorsPoint Controls the point at which a new value is obtained
from a generator. Possible values are gpNone,
gpImmediate, and gpDeferred. The default is
gpDeferred.
POSTING CHANGES
Use these properties to configure how FireDAC applies changes to the database.
The properties whose names begin with Check affect the associated properties
of the dataset's TFields. The remaining properties affect the associated
properties of the datasets themselves.
Property Description
AutoCommitUpdates When set to True, updates are automatically committed
after posting while CachedUpdates is True. Otherwise,
a manual call to CommitUpdates is required. The
default is False. This property was introduced in
Delphi 10 Seattle.
Transaction Options
Use the properties of the TFDTxOptions class to configure how FireDAC
manages transactions. Only the FDConnection and FDTransaction classes have
a transaction object property, named TxOptions and Options, respectively.
Figure 3-9 shows the expanded Options property for an FDTransaction in the
Object Inspector.
ISOLATION LEVEL
Use this Isolation property to control transaction isolation.
Property Description
Isolation Set the transaction isolation level, which dictates how changes
made during a transaction are viewed by other users
(connections). Possible values are xiUnspecified, xiDirtyRead,
xiReadCommitted, and xiRepeatableRead. The default is
xiReadCommitted.
UPDATE ABILITY
Use this ReadOnly property to define a read-only transaction.
Property Description
ReadOnly Set to True to indicate to the database that the transaction will
only read data from the database. The default is False.
AUTOMATIC COMMITTING
Use these properties to configure the automatic committing of transactions.
AutoStart, AutoStop, and StopOptions are specifically intended for InterBase
and Firebird, neither of which supports automatic transaction management.
Property Description
AutoCommit When set to True, a transaction is automatically started
before executing an SQL statement, and upon completion is
automatically committed or rolled back. The default is True.
DBMS-SPECIFIC PARAMETERS
Use this Params property to define parameters specific to a particular database.
Property Description
Params A collection of name/value pairs and names used to define
additional transaction parameters. The default is an empty
collection.
DISCONNECTION ACTION
This DisconnectAction property defines the default action to take when a
transaction is complete.
Property Description
DisconnectAction Set to define the default action to take when a connection
is being disconnected and a transaction is still active.
Possible values include xdNone, xdCommit, and
xdRollback. The default is xdCommit.
NESTING
Use this EnableNested property to enable or disable nested transactions.
Property Description
EnableNested Set to True to enable nested transactions. When enabled,
calling StartTransaction from within an active transaction
context initiates (or emulates) a nested transaction. The
default is True.
Understanding UpdateOptions.UpdateMode
In the opening chapter of this book, I stressed the similarities between FireDAC
and the BDE: specifically, that you can replace a TTable or TQuery with a
TFDQuery and most of your code will work nicely without adjustment.
There is one difference, however, that is significant, and it's related to the
default value of a property in FireDAC whose value is different from a similar
property found in the BDE. And because the effects of this property are so
profound it is important that you know about this difference when you decide to
start using FireDAC. The property is UpdateMode.
UpdateMode is a property of TFDUpdateOptions, which is the type associated
with the UpdateOptions property found in a number of classes in the FireDAC
framework. These include FDManager, FDConnection, FDQuery,
FDMemTable, FDTable, FDCommand, and FDSchemaAdapter.
The UpdateMode property in the BDE and the UpdateOptions.UpdateMode
property are both of the type TUpdateMode. The following is the declaration of
the TUpdateMode type:
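TUpdateMode = (upWhereAll, upWhereChanged, upWhereKeyOnly);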
FireDAC's default is upWhereKeyOnly, and if you keep this default value, there is
a good chance that some users may report that their changes have disappeared.
Here is how this can happen. If two or more users read the same record (by way
of a query, stored procedure call, or by opening an FDTable), and two or more
users post a change to that record, the last user to post their record will replace
those changes posted before them. What's problematic about this is that the
users who posted before the last user will have no idea that their changes have
been overwritten.
By comparison, most databases use either pessimistic locking (the first user to
start editing the record prevents any other user from editing the record until
changes have been posted), or optimistic locking (once the first user posts a
change to a record, subsequent attempts to post to that same record will fail,
since the original record can no longer be found based on the WHERE clause
predicates, that is, the Boolean expressions in the SQL WHERE clause). In these
scenarios, the first user to post wins, and other users must first re-read the
record, after which they can decide whether or not to update the newly posted
contents.
FireDAC defaults to an UpdateMode of upWhereKeyOnly, since the queries
required to update the database tend to execute faster. It is up to you, however,
to decide whether or not the performance improvement is more important than
the possible loss of data.
The DataSetProvider class, the class which ClientDataSets use to resolve
changes back to the underlying database, and BDE datasets (TTable, TQuery,
and TStoredProc) also have an UpdateMode property. For these objects, the
default value of UpdateMode is upWhereAll, the conservative setting that
prevents a user from overwriting another user's edits.
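If you want FireDAC to take the same conservative approach, you can override
the default, as in this one-line sketch (assuming an FDQuery named FDQuery1):
FDQuery1.UpdateOptions.UpdateMode := upWhereAll;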
So, the bottom line is this. You need to understand how FireDAC's
UpdateOptions.UpdateMode affects how records are updated in the underlying
database, and set this property to the value that meets the needs of your
application.
In the next chapter, I cover the basics of data access with FireDAC.
Chapter 4
Basic Data Access
In Chapter 1, you learned about the many features that make FireDAC the data
access mechanism of choice. In Chapter 2, you learned how to make connection
to your database, and in Chapter 3, you discovered that FireDAC is highly
configurable, offering a wealth of options for fine-tuning your data access. Now
it's time to get serious about working with the actual data in your database.
This chapter is designed to introduce you to the basics of data access with
FireDAC. I will begin by showing you how to retrieve and navigate the data in
your database using your user interface. I will also introduce the use of the three
primary FireDAC datasets — FDTables, FDQueries, and FDStoredProcs.
Let's begin by creating a very simple example, one where we can display the
contents of a table in a grid. These steps, and nearly all of the remaining
examples in this book assume that you have InterBase installed, which should
have happened automatically when you installed Delphi. If you do not have
InterBase installed, you should download the free developer's edition and install
it before you continue.
Note: If you cannot open the EMPLOYEE named connection from the Data
Explorer, as described in the following steps, please visit Appendix A for more
information about connecting to InterBase. You may simply need to start the
InterBase server from the Services applet.
Use the following steps to easily create an application that permits you to view
and edit the employee table in the employee.gdb database that is installed
alongside InterBase:
1. Create a new project in Delphi by selecting File | New | VCL Forms
Application from Delphi's main menu.
2. Using the Tool Palette, place a DBNavigator, a DBGrid, and a
DataSource onto the design surface of your form. If you are unfamiliar
with the Tool Palette, you can start typing into the field at the top (next
to the magnifying glass icon) the name of a component you want to
place. For example, start typing DataSource, and the Tool Palette will
progressively filter the available components until just the DataSource is
displayed. Once visible, drag the DataSource onto your form's design
surface and release it. When you are done placing components your form
should look something like that shown in Figure 4-1.
3. From Delphi's main menu select View | Data Explorer to display the
Data Explorer pane. This pane should appear in the upper-right corner of
your designer, by default.
4. From the Data Explorer, expand the FireDAC node, and then expand the
InterBase node, and then the EMPLOYEE node under the InterBase
node. Finally, expand the Tables node. Your Data Explorer should now
look something like that shown in Figure 4-2.
5. Now click on the CUSTOMER node under Tables, and hold down your
left mouse button while dragging the CUSTOMER table to your form
and release the button. Delphi responds by placing an FDConnection
named EmployeeConnection along with an FDQuery named
CustomerTable onto your design surface.
You can now run this form by pressing F9, or by clicking the Run button on
Delphi's toolbar, or by selecting Run | Run from Delphi's main menu. The running
form is shown in Figure 4-4.
You can now click on the buttons of the DBNavigator to navigate and edit the
records shown in the grid, and can also scroll the grid using the scroll bar or
your navigation keys on your keyboard. Furthermore, if you edit a cell in the
grid, giving it a valid value, you can post that record to the underlying database
either by clicking the Post button on the DBNavigator (the button with the check
mark icon displayed on it), or by navigating off of the edited record.
Most of your applications will be more complicated than this one, but this
example demonstrates how easy it is to create an application that can read and
write data from your database using FireDAC. In this preceding example, all of
our interactions with the data, including editing and navigation, were performed
through the provided user interface. In the following section I will take a more
in-depth look at interacting with data through the user interface.
Later in this chapter, I will discuss the three primary FireDAC datasets in
greater detail. For information on programmatic navigation and editing, refer to
Chapter 6, Navigating and Editing Data.
As you can see in this illustration, some of the buttons are not enabled. This
DBNavigator is associated with an active DataSet, and the enabled properties of
the buttons are context sensitive. For example, you can tell by this image that
the current record is the first record in the dataset, since the First and Prior
buttons are not enabled. Furthermore, the dataset is in the Browse state, since
the Edit button is active, and the Cancel and Post buttons are not active.
You can control which buttons are displayed by a DBNavigator through its
VisibleButtons property. For example, if you are using the DBNavigator in
conjunction with a FireDAC FDMemTable that reads and writes its data from a
local file, you will want to suppress the display of the Refresh button, since
attempting to refresh an FDMemTable that uses local files makes no sense.
Figure 4-5 shows this property editor expanded, permitting you to toggle which
buttons you want to be available.
Figure 4-5: Use the Object Inspector at design time to control which
buttons are visible in the DBNavigator
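You can make the same change in code. For example, the following sketch
(assuming a navigator named DBNavigator1) removes the Refresh button at runtime:
DBNavigator1.VisibleButtons := DBNavigator1.VisibleButtons - [nbRefresh];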
Another DBNavigator property whose default value you may want to change is
ShowHint. The glyphs used for the various buttons of the DBNavigator are not
necessarily obvious to all (take the Edit button, for example). To improve this
situation, setting ShowHint to True supplements the glyphs with popup help
hints, as shown in the following illustration:
You don't even have to accept the default hints offered by the DBNavigator.
Using the Hints property editor, which is a StringList editor, you can supply
whatever text you want for each of the buttons in your DBNavigator. To do this,
you change the text associated with the corresponding button in the
DBNavigator based on the position of the button in the DBNavigator. The
StringList editor, with alternative DBNavigator button hints, is shown in Figure
4-6.
Figure 4-6: Use the Hints property editor, in conjunction with the
ShowHint property, to provide custom hints for a DBNavigator
Before leaving this topic, I want to mention an often overlooked gem in Delphi.
The Fields Editor, which will be discussed in more detail in later chapters in this
book, includes a small navigator control that you can use to navigate an active
dataset at design time.
This can be seen in Figure 4-7, where a form that includes a DBNavigator and a
DBGrid are associated with an active Table. Notice that the small navigator at
the top of the Fields Editor and the DBNavigator are both indicating that the
dataset is neither on the first record nor the last record (since First, Next, Last,
and Prior are all enabled), which is confirmed by the small arrow indicator in
the DBGrid. Before this figure was captured, the Fields Editor navigator had
been used to advance to the third record in the dataset.
Figure 4-7: The Fields Editor has a handy little navigator that you can use
to navigate records of an active dataset at design time
In Figure 4-8, the Add All Fields button of the Columns property editor has
been clicked. These columns can then be selectively deleted, or their properties
adjusted by selecting a specific Column in the Columns Editor and then using
the Object Inspector to change its properties:
Depending on which multi-record control you are using, you can navigate
between records using UpArrow, DownArrow, Tab, Ctrl-End, Ctrl-Home,
PgDn, PgUp, among others. These keypresses may produce the same effect as
clicking the Next, Prior, Last, First, and so on, buttons in a DBNavigator. It is
also possible to navigate the records of a dataset using the vertical scrollbar of
these controls.
How you edit a record using these controls also depends on which type of
control you are using as well as their properties. Using the default properties of
these controls, you can typically press F2 or click twice on a field in one of
these controls to begin editing.
Posting a record occurs when you navigate off an edited record. Inserting and
deleting records, depending on the control's property settings, can also be
achieved using Ins and Ctrl-Del, respectively. Other operations, such as Refresh,
are not directly supported. Consequently, in most cases, multi-record controls
are combined with a DBNavigator to provide a complete set of record
management options.
Navigation and LiveBindings
LiveBindings, which were originally introduced in Delphi XE2, are specialized
classes that associate string expressions, which are evaluated at runtime by
Delphi's expression engine, with a property of a class. The expression that is
evaluated often includes data from another class, obtained by reading a property
or executing a method (and can even include reading multiple properties and/or
executing several methods). In the end, the function of a LiveBinding is to
assign data associated with one class to another, making the target class data-
aware (we could also use the term data binding here, but for our purposes, the
terms data bound controls and data-aware controls are interchangeable).
LiveBindings were introduced in order to provide data awareness in
FireMonkey components, which were not designed to support the data
awareness implemented via DataLinks. However, LiveBindings are also
available in the VCL, and can be used to implement data awareness in almost
any class, not just those that support DataLinks.
DataLinks and LiveBindings differ in several additional ways. One of the most
obvious is that, unlike DataLinks, LiveBindings are not encapsulated in some
other control. Instead, LiveBindings are standalone classes. It is through the
configuration of a LiveBinding that the expression that gets assigned to the
target component's property is defined and the source and target components are
selected (though through the LiveBindings Designer this process is mostly
transparent).
The second major difference between DataLinks and LiveBindings is that
DataLinks wire a data-aware control to a DataSource. LiveBindings, by
comparison, access the underlying DataSet through a BindSource, which is
nearly always an instance of the BindSourceDB class. BindSourceDB classes
are designed to make the Fields and other relevant data about their associated
datasets available to the LiveBinding and the expression engine through
properties.
Similar to the VCL, there are two general navigation-related mechanisms
associated with LiveBindings: The BindNavigator and position-related
LiveBinding expressions. Let's start with the BindNavigator.
The BindNavigator
Technically speaking, the BindNavigator control is a FireMonkey control, and is
not a LiveBinding. Consequently, it is not available for use with VCL
applications.
The BindNavigator, like the DBNavigator, is a video player-like control that
you can hook to a BindSourceDB in order to control record navigation and
editing. The BindNavigator is shown in the following illustration:
Figure 4-10: Drag from the FireDAC dataset to the BindNavigator to form
an association
provide for the user's interaction with the control, which in turn affects the
dataset.
For example, with a StringGrid connected to a BindSourceDB by way of
LiveBindings, when the end user presses Ctrl-End to navigate to the last record
displayed in the StringGrid, a PosSource LiveBinding expression (which was
automatically created when you bound the StringGrid to the BindSourceDB)
triggers in response to the navigation. This LiveBinding expression is shown in
the LiveBindings Expression Editor shown in Figure 4-12. This expression
instructs the LiveBinding to set the current record of the dataset to which the
BindSourceDB points to either 1, or 1 plus the currently selected record in the
StringGrid (the StringGrid is zero-based, while the dataset's RecNo property is
1-based).
PosControl, by comparison, goes the other way. In other words, changes to the
current record of the dataset cause the LiveBinding to trigger and assign the
current record of the StringGrid to match that of the dataset's current record.
This can be seen in Figure 4-13.
Understanding FDTable
FireDAC's FDTable is a dataset component that provides a component-level
equivalent of the BDE TTable. While it is simple to use, it is far more capable
than the BDE version. Specifically, in addition to the navigating, editing, and
searching capabilities provided by the TTable, FDTable also permits filtering,
filtered navigation, cached updates, aggregation, and persistence. I will talk
about these more advanced features in some of the later chapters of this book.
While FDTable is easy to use, most FireDAC users prefer to use the FDQuery.
FDQueries support all of the advanced features available through FDTables, but
base their initial data selection on an SQL query; SQL is the common query
language for most relational database management systems (RDBMSs). While it
is true that an FDTable also bases its data selection on an
SQL query, it is a basic SELECT * FROM query, whereas with an FDQuery the
SQL is of your design, which means it can include joins, sub-queries,
aggregation, or any of the other elements that make SQL such a strong language
for data selection and manipulation. Nonetheless, since the FDTable is the
simplest of the FireDAC datasets, I will begin by demonstrating its use.
Configuring an FDTable
Configuring an FDTable to retrieve data from a database table could not be
easier. All you need is to set the FDTable’s Connection property to an active
connection to a database, after which you point the FDTable's TableName
property to the table whose data you want to work with. This is demonstrated in
the following steps. Since these steps are similar to those given earlier in this
chapter, I will keep them brief:
1. Select File | New | VCL Forms Application from Delphi's main menu.
2. From the Tool Palette, place a DBNavigator, a DBGrid, and a
DataSource onto the form and adjust their placement so that the
DBNavigator is at the top and the DBGrid occupies most of the space
below it. Also place an FDGUIxLoginDialog on the form as well. (You
might also need to place an FDPhysIBDriverLink and
FDGUIxWaitCursor on your form, depending on the version of Delphi
you are running.)
3. From the Data Explorer, drag the node for the EMPLOYEE database
onto the form designer. This will create an FDConnection named
EmployeeConnection. If you accidentally drag-dropped the node of the
employee table, you will get both an FDConnection named
8. Set the Active property of the FDTable to True. The DBGrid and the
DBNavigator should now be active, and your form may look something
like that shown in Figure 4-14.
For those FireDAC datasets that return a result set, at design time you
can set Active to True to execute the query. On the other hand, if you have
assigned an SQL statement that creates a table, inserts a record, or deletes
records, at design time you can right-click the FDCommand and select Execute.
For those FireDAC datasets that return a result set, if you close the project and
open it again, that query will execute again once the dataset has been created.
For example, if you have an FDQuery whose Active property is True, and you
have saved that project, the next time you open the form or data module on
which that query appears, by default the query will execute once it is created.
Executing DataSets at Runtime
In practice, you typically do not want your datasets to automatically open,
instead opting to have them to open only when you need either the result set
they return or the effects they produce, such as creating a new table or removing
unwanted records. For those datasets that do not execute automatically, how you
make them execute depends on whether or not they return a result set.
For those FireDAC datasets that return a result set, you execute the dataset
programmatically by setting Active to True or by calling the Open or Execute
method. If you are not sure whether or not a result set is returned, you can call
OpenOrExecute.
For FireDAC datasets that do not return a result set, you call a method
appropriate for the particular FireDAC component you are using. For example,
with a FireDAC FDQuery, you call ExecSQL. When using an FDStoredProc,
you can invoke ExecProc or ExecFunc, depending on whether the
FDStoredProc returns no data or a scalar value, respectively.
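The following sketch summarizes these calls (it assumes components named
FDQuery1 and FDStoredProc1):
FDQuery1.Open;          // returns a result set; equivalent to Active := True
FDQuery1.ExecSQL;       // executes SQL that returns no result set
FDQuery1.OpenOrExecute; // use when you are not sure whether a result set is returned
FDStoredProc1.ExecProc; // executes a stored procedure that returns no result set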
When Should You Connect?
For those FireDAC datasets that return a result set, being able to set Active to
True at design time is a real advantage. It permits you to configure data-aware
controls in a realistic setting, and lets you see your data as you design your
forms. On the other hand, in most cases you want to control when your datasets
become active at design time. For example, imagine that you have a data
module with 50 FireDAC FDQueries on it. The last thing you want to happen is
have all 50 queries execute as part of the data module creation. That might take
a long time. It is much better, both resource- and performance-wise, to make
your FireDAC datasets active only when you need their data.
Even more to the point, in many cases you will not know what data you want
until the user has interacted with your application. For example, you might have
set up some dummy parameters in order to test a parameterized query at design
time, but it really doesn't make sense to execute the runtime query until you get
the data you want to use for the parameters from the user at runtime. Executing
the query automatically using the dummy parameters at runtime is a complete
waste of resources.
For Delphi developers using a data access framework other than FireDAC, this
means remembering to set Active to False on each of your datasets before
compiling your quality control or release build. By doing that, however, you
will need to once again set Active to True the next time you are working with
your datasets at design time.
Fortunately, FireDAC has a very nice solution. Using the ActiveStoredUsage
property of FDConnections and FireDAC datasets, you can define the
circumstances under which the Active property is saved. When you expand the
ActiveStoredUsage property in the Object Inspector, you can select to restore
the value of the Active property either at runtime or at design time.
If you enable the auDesignTime flag, and Active is True, each time the
FireDAC component is created at design time, its Active property will be once
again set to True. The same rule applies for the auRunTime flag. So, if you want
to automatically execute your connection or query at design time, but leave the
connection or query closed at runtime (until you are ready to open it), enable the
auDesignTime flag but leave the auRuntime flag disabled. It's a beautiful
solution to an otherwise nagging problem.
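For example, to keep a query active while you design but closed when your
application starts, the assignment (normally made in the Object Inspector rather
than in code) would read:
FDQuery1.ActiveStoredUsage := [auDesignTime];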
4. With the SQL Command tab selected, enter the following SQL statement
into the provided memo field:
SELECT * FROM Employee
Click the button labeled Execute, which you will find to the right of the
memo field, in order to execute the statement that you entered. Your
Query Editor should now look something like that shown in Figure 4-15.
5. Click OK to save the query to the FDQuery's SQL property and to close
the Query Editor.
6. Set the DataSet property of the DataSource to point to your newly added
FDQuery.
7. Finally, set the Active property of the FDQuery to True. Your form
should now look similar to that shown in Figure 4-16.
These results do not look any different from those produced by the FDTable; in
this case, the only real difference is that Live Data Window (LDW) mode is not an
option, since it is not supported by FDQueries. On the other hand, although we specified an SQL
statement that is the same as that which was generated by the FDTable
(SELECT * FROM Employee), we could have provided any valid SQL in order
to get our result, or even provided a DDL (data definition language) statement
that doesn't generate a result set at all. In other words, we exerted more control
over the query operation with minimal effort. It is for this reason that most
Delphi developers prefer using FDQueries over FDTables, in most cases.
In that case, you use an FDQuery and the SQL statement will be a stored
procedure invocation similar to the one shown above (although if the stored
procedure does not return a result set or value, you invoke ExecSQL or
OpenOrExecute instead of setting Active to True). The other option is to use an
FDStoredProc component, as shown in the following steps:
1. Using the existing Delphi project that you created earlier in this chapter,
and which you modified in the preceding section, let's modify it further
to use a stored procedure.
2. Begin by setting the FDQuery's Active property to False.
3. Next, add an FDStoredProc component to the main form of this
application. It should automatically set its Connection property to the
EmployeeConnection FDConnection. If it does not, set the stored
procedure's Connection property to the FDConnection.
4. With the FDStoredProc selected, use the drop-down list on the
StoredProcName property to display a list of defined stored procedures
available in the employee.gdb database, as shown in here.
Figure 4-17: A stored procedure call returns data from the ORG_CHART
stored procedure from the EMPLOYEE database
Chapter 5
More Data Access
In the preceding chapter, you learned how to access your database tables using
FDTables, FDQueries, and FDStoredProc components. This chapter extends this
discussion with a look at parameterized queries and stored procedures. I
continue this discussion of working with data by looking at several additional
classes that you may want to employ in your database applications. These
include the FDUpdateSQL component, which simplifies the process of
customizing updates to database tables, FDCommands, which are used within
other FireDAC datasets, and FDTransactions, which you can use to ensure that
your updates are completed in an all-or-none manner. I also talk about
asynchronous versus synchronous query execution, as well as the FireDAC
Monitor utility.
Let's begin with parameterized queries and stored procedures.
In the first example, the query will always return data associated with records
where the Emp_No field contains the value 2. Assuming that a valid value has
been assigned to the parameter in the second query, this query will return data
associated with any record whose Emp_No field matches the value assigned to
the parameter. The first example is static, while the second is dynamic.
The Advantages of Parameters
Parameters serve a number of purposes. These include permitting you to write
flexible queries that can be customized prior to their execution, improving
performance when many different yet similar queries need to be executed, and
preventing the injection of potentially damaging SQL statements into the queries
that you execute.
Before continuing, it is worth considering each of these advantages in greater
depth.
GREATER FLEXIBILITY
Simply by introducing a single parameter, you turn a static query, one that
returns a similar result each time it is executed, into a flexible query whose
result sets can be changed merely by changing the value of a parameter. You
might notice that I did not say that static queries always return the same result
sets, because in most cases they don't. The data they return is based on the data
contained in the database, and most databases change day by day, if not second
by second. As a result, the same static query is not expected to return the same
result set each time.
However, static queries do return the same information, even though the actual
values may change. By comparison, a parameterized query may return very
different data, depending on the values of the parameters employed.
A simple example makes this obvious. Imagine a query that returns detailed
information about a particular client's purchases. Assuming that the parameter
defines which client the data pertains to, not only do the results change, but the
interpretation changes as well. A static query may return information about one
client's purchases, and those change as the client makes additional purchases,
but changing the client means that entirely different data is being returned.
When you consider that a parameterized query may include many different
parameters, it is easy to see why parameterized queries provide flexibility well
beyond that afforded by static queries.
IMPROVED PERFORMANCE
Most database servers perform an analysis of a query prior to its execution,
creating an execution plan to optimize what it's being asked to do. When a query
is parameterized, this preparation needs to be performed only once, prior to the
first time the query is executed. So long as that query has not been unprepared,
the values of the parameters can be changed and the query can be executed
again without requiring a new execution plan.
The time saved by reusing the execution plan is significant, especially if the
query is one that is being executed hundreds, or even thousands of times, which
is what often happens during large reporting operations. Any operation where a
parameterized query is executed repeatedly, albeit with different values in the
parameters, produces superior performance to the sequential execution of
different static queries.
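A minimal sketch of this pattern, assuming an FDQuery named FDQuery1 and the
sample Employee table, might look like the following:
FDQuery1.SQL.Text := 'SELECT * FROM Employee WHERE Emp_No = :EmpNo';
FDQuery1.Prepare;  // the execution plan is created once
FDQuery1.ParamByName('EmpNo').Value := 2;
FDQuery1.Open;
// ... use the result set ...
FDQuery1.Close;
FDQuery1.ParamByName('EmpNo').Value := 4;
FDQuery1.Open;     // re-executes without preparing a new execution plan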
PREVENTION OF SQL INJECTION
When accepting input from the user interface for use in a predicate of an SQL
statement, it is essential that you employ a parameter to hold that data, rather
than concatenating the user input into a string that you execute. Most database
servers permit SQL queries to include two or more individual SQL statements
separated by a special character, most often a semicolon. If you construct a
query string at runtime by concatenating literal SQL statements with data input
by a user, there is a possibility that a user with a knowledge of SQL could
exploit this feature to undermine your database. This is called SQL injection.
Here's an example. Imagine that your client application includes a query that is
constructed at runtime based on the user's entry of a Customer ID. Consider code
along the following lines (a sketch that assumes an FDQuery named FDQuery1, an
Edit named Edit1, and a CustomerID column), which wrongly builds an SQL
statement by concatenating the customer ID that the user enters into an Edit in
your user interface:
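FDQuery1.SQL.Text :=
  'SELECT * FROM CUSTOMER WHERE CustomerID = ' + Edit1.Text; // unsafe concatenation
FDQuery1.Open;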
Here, the value entered into the Edit is concatenated to the query being assigned
to the SQL property of the FDQuery. Now consider what would happen if the
user enters the following data into the edit:
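1001;DROP TABLE CUSTOMER;//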
Assuming that the // characters are comment identifiers, making everything that
follows them in the SQL statement appear to be a comment, the resulting query
would actually be an SQL script (two SQL statements), looking something like
the following:
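SELECT * FROM CUSTOMER WHERE CustomerID = 1001;
DROP TABLE CUSTOMER;//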
By contrast, when the customer ID is supplied as the value of a parameter, the
input is treated as data rather than as executable SQL. In this case, if the user
typed 1001;DROP TABLE CUSTOMER;// into the Edit, the resulting query would
actually try to select a customer whose CustomerID is 1001; DROP TABLE
CUSTOMER;//, and such a query would likely produce a null result set. More
importantly, it would cause no damage to any of the database tables.
Defining Parameters at Design Time
Both the FDQuery and FDStoredProc components have a Params property, and
these properties are of the type TFDParams, a collection of TFDParam items. So
long as the ParamCreate flag is set in the ResourceOptions property of the
associated dataset (the default), FireDAC will create one TFDParam instance for
each parameter defined in the SQL property of the FDQuery component, as well
as one for each corresponding parameter in the stored procedure whose name
has been assigned the StoredProcName property of an FDStoredProc
component.
For those parameters that are input parameters, you can select the parameter and
assign data to its Value property. For example, consider a query along the
following lines, which selects employees by department number using a named
parameter, DN:
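SELECT * FROM Employee WHERE Dept_No = :DN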
After entering this query into the SQL property of an FDQuery, if you open the
Params collection editor from the Object Inspector you will see the named
parameter, as shown here:
Select the parameter named DN, and the Object Inspector displays the properties
of the TFDParam. Enter 623 into Value to define that you want to restrict the
selection to employees from department 623, as shown here:
If you now set the Active property of this query to True, and your FDQuery is
connected to a DataSource which in turn is connected to a DBGrid, you will get
a result set that looks similar to that shown in Figure 5-1.
Figure 5-1: The Parameter of a parameterized query has been set at design
time, and the query has been executed
If you are working with a parameterized stored procedure, the process is very
similar. Consider the Params collection shown in the following illustration,
which is displayed if you set an FDStoredProc component's StoredProcName
property to Mailing_Label in the employee.gdb database. Here you see seven
parameters. The first parameter, CUST_NO, is an input parameter, and you can
assign a value to it at design time. The remaining six parameters are output
parameters, and they will get values only after you execute this stored procedure
with a valid value assigned to the CUST_NO named parameter (and these
values might be null, depending on the value of the input parameter).
SELECT
(SELECT e.Full_Name FROM Employee e
WHERE e.Emp_No = s.SALES_REP) as SalesRep,
c.Contact_FIRST || ' ' || c.Contact_Last as Customer,
s.Order_Date,
s.Qty_Ordered as Quantity,
s.Total_Value as Total
FROM SALES s
INNER JOIN Customer c on c.Cust_No = s.Cust_No
WHERE c.Cust_No = :cn
AND s.Total_Value > :v;
Figure 5-2: A query with two parameters appears in the FireDAC Query
Editor
Figure 5-3: FireDAC has identified the named parameters from the query,
and permits you to configure them in the FireDAC Query Editor
In most cases, it is not necessary to provide the Param type and Data type as the
Value property of the TFDParam is a variant, but you can provide that
information if you want, or under those conditions where FireDAC cannot
correctly resolve the type of your parameters. In this case, it is sufficient to
simply provide a value for each parameter. Enter 1001 in the Value field for the
CN parameter, and 20000 in the Value field for the V parameter. If you then
click Execute, you will get the result set you see in Figure 5-4.
Figure 5-4: The FireDAC Query Editor has bound the two parameters to,
and executed, the query
If you now click the OK button, the FireDAC Query Editor will not only save
the SQL statement to the FDQuery's SQL property, but it will also save the
values of the parameters to the FDQuery's Params collection.
SELECT
(SELECT e.Full_Name FROM Employee e
WHERE e.Emp_No = s.SALES_REP) as SalesRep,
c.Contact_FIRST || ' ' || c.Contact_Last as Customer,
s.Order_Date,
s.Qty_Ordered as Quantity,
s.Total_Value as Total
FROM SALES s
INNER JOIN Customer c on c.Cust_No = s.Cust_No
WHERE c.Cust_No = :cn
AND s.Total_Value > :v
There are two parameters here, and they are named parameters. To assign the
parameters by name, you can use the ParamByName method, to which you pass
the name of the parameter. This method returns an FDParam instance, and you
use it to assign a value to the parameter. FDParam instances have a variety of
properties such as AsString, AsInteger, and so forth (similar to Fields) to which
you can assign the value of the parameters. However, the Value property, which
is of type variant, is almost always sufficient to handle parameter assignment.
The following code segment, a minimal sketch that assumes the query has been
loaded into an FDQuery named FDQuery1, demonstrates how to assign data to the
named parameters:
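FDQuery1.ParamByName('cn').Value := 1001;
FDQuery1.ParamByName('v').Value := 20000;
FDQuery1.Open;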
There are a couple of comments that I have about this code. First of all, the
parameter names are case insensitive, so it doesn't matter if you reference them
using Cn, cN, CN, or cn. It's all the same to FireDAC. Also, it does not matter in
which order you assign values to the parameters. The only thing that counts is
that each parameter has a value before you attempt to execute the query.
As mentioned, even when you use named parameters you can assign parameter
values to your FireDAC datasets by position. In short, the position of a
parameter for named parameters is the order in which the parameter names
appear in the query. In the preceding SQL statement, the first parameter
(position 0) is associated with the name cn, and the second parameter (position
1) is associated with the v parameter. As a result, the following code, again
assuming FDQuery1, performs the same parameter assignment as that shown in
the preceding code sample:
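FDQuery1.Params[1].Value := 20000; // the v parameter (position 1)
FDQuery1.Params[0].Value := 1001;  // the cn parameter (position 0)
FDQuery1.Open;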
Notice that I didn't assign values to the parameters in order. I did that on purpose
just to emphasize that order is not relevant, just so long as each parameter has a
value prior to your opening the dataset.
Since positional parameters do not have names, but instead use some sort of
marker, most often the ? character, you can only assign these parameters based
on position. For the same reason, you must always provide one value for each
positional parameter marker that appears in a query, while when using named
parameters you assign values by position based on the position of the first
instance of a given name in the query. For example, consider the following
query, which uses positional parameters:
SELECT
(SELECT e.Full_Name FROM Employee e
WHERE e.Emp_No = s.SALES_REP) as SalesRep,
(SELECT c.Contact_FIRST || ' ' || c.Contact_Last
FROM Customer c WHERE c.Cust_No = ?) as Customer,
s.Order_Date,
s.Qty_Ordered as Quantity,
s.Total_Value as Total
FROM SALES s
WHERE s.Cust_No = ?
AND s.Total_Value > ?
This query requires three parameters, even though the first two parameters are
likely to be the same value (the same customer number). Granted, I could have
written this query in a way that would have required only two parameters, but
the point is valid. When using positional parameters, you must provide a value
for each positional parameter marker.
So, in this case, the parameter value assignment and query execution would look
something like the following (again assuming FDQuery1):
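FDQuery1.Params[0].Value := 1001;  // customer number used in the sub-select
FDQuery1.Params[1].Value := 1001;  // customer number used in the WHERE clause
FDQuery1.Params[2].Value := 20000; // minimum total value
FDQuery1.Open;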
Assigning parameters to datasets that do not return a result set uses the same
approach as shown here. The only difference is that you will not use the Active
property or the Open method to execute the dataset. Instead, you will call the
appropriate Exec... method. For queries, this is ExecSQL or OpenOrExecute,
and for stored procedures, you will invoke ExecProc, ExecFunc, or
OpenOrExecute.
The second option is to attach an OnUpdateRecord event handler to the query,
and basically do the same thing as in the first option, though with some additional
help. (OnUpdateRecord is discussed in detail in Chapter 16, Using Cached
Updates.) The third option is to use FireDAC's FDUpdateSQL component.
Defining the Query
Here is a rather simple example, but it demonstrates the value of FDUpdateSQL.
Let's begin with a query that is clearly not updateable; the following is a sketch of
the kind of aggregate query used in the demo project described below:
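SELECT City, State_Province, Country, COUNT(*) AS Cnt
FROM Customer
GROUP BY City, State_Province, Country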
Here's the story. Imagine that you manage a shipping company, and you need to
review shipping orders at the end of each day to ensure that the orders are
accurate. Since in the past you have found that the most common error is the input
of the city name (both State_Province and Country are validated by a table, but
City is not), you need to fix misspelled city names, regardless of how many
times an invalid city name was entered.
This requirement, to update a field, in possibly more than one record, from a
query that uses aggregation and grouping, violates a number of the restrictions
imposed by FireDAC and the default values of the UpdateOptions property.
Fortunately, the FDUpdateSQL component permits you to overcome these
restrictions, and this is demonstrated in the FDUpdateSQLDemo project, whose
main form is shown in Figure 5-5.
Code: You can find the FDUpdateSQLDemo project in the code download. See
Appendix A for details.
Note: There were no duplicate cities in the Customer table of the original
employee.gdb database. Because I felt it important to demonstrate the use of the
UpdateOptions.CheckUpdatedRecords property, and the ability to project an
update onto two or more records, I added an extra record to the database — a
customer whose city and country fields held the values Den Haag and
Netherlands, respectively.
To begin with, you need to configure your FDQuery with both the query and the
UpdateOptions that will permit you to make changes. Specifically, with the
preceding query assigned to the FDQuery, the default options would prevent any
editing, since FireDAC cannot figure out what you are trying to accomplish:
1. Using the FireDAC Query Editor, enter the preceding query into the
SQL Command field of the query editor.
2. Click Execute to verify that you have entered the query correctly and
FireDAC can retrieve your aggregate query. Your FireDAC Query
Editor should look like that shown in Figure 5-6.
Figure 5-6: A readonly query has been executed using the FireDAC Query
Editor
1. Click on the Options tab of the FireDAC Query Editor and scroll to the
UpdateOptions section.
2. Expand the General Updating group. Since we only need to update the
City field of the query result, uncheck the Enable insert and Enable
delete checkboxes. Leave Enable update checked.
3. Next, expand the Posting Changes group. Leave Update mode set to
upWhereKeyOnly. However, we do not want FireDAC to enforce the
readonly fields flag or to check update records count. This query is a
readonly query, and as a result, all fields will be seen internally as
readonly. By unchecking the Check "readonly" field flag, we are telling
FireDAC to ignore this information and let us make the changes we
want.
4. Normally, FireDAC will raise an exception if an update affects more or
less than one record. In this case, though, we want to correct all
misspelled city names, regardless of how many times the misspelling
occurred. To do this, uncheck Check updated record count.
Tip: When I first created this example, the Check updated records count option
was checked and disabled on the Options page of the FireDAC Query Editor. As
a result, I could not control this property from the Options page of the FireDAC
Query Editor. Instead, I had to select the FDQuery in the Object Inspector and
expand its UpdateOptions property. From there I was able to set
CheckUpdatedRecords to False. If you find that there are properties that you
cannot set from the FireDAC Query Editor, you should try using the Object
Inspector.
We are through configuring our FDQuery. The Options page of the FireDAC
Query Editor should now look similar to that shown in Figure 5-7.
Figure 5-7: The Update Options of the query have been modified
5. It's now time to add our FDUpdateSQL component from the Tool Palette
onto our form.
6. Once the FDUpdateSQL component is in place, select the FDQuery and
set its UpdateObject property to point to the FDUpdateSQL component.
7. Next, set the FDQuery's Active property to True, since the
FDUpdateSQL will need access to the metadata that the FDQuery will
collect. If you do not do this in advance, the FDUpdateSQL component
will prompt you to execute the query, since it needs this metadata.
8. Finally, double-click the FDUpdateSQL component to display the Update
SQL Editor (or, alternatively, right-click the FDUpdateSQL component
and select Update SQL Editor).
Figure 5-8: The Update SQL Editor has been configured to generate our
UPDATE query
10. Now, all we need to do is click on the Generate SQL button. After we do
that, we can click on the SQL Commands tab to see the queries that
FireDAC has created for us. In this case, due to the properties we have
configured for the FDQuery, there are only two generated queries, one
for the UPDATE query, and the other for the SELECT (FetchRow). The
UPDATE query is shown in Figure 5-9.
Figure 5-9: The generated UPDATE query as shown in the Update SQL
Editor
You can now run the project. When you do, you notice that the Dutch city that
you know as The Hague, the city where the United Nations' International Court
of Justice meets, has been entered using the Dutch name (Den Haag). Since
you've had shipping issues using that name in the past, you move your cursor to
the City field for Den Haag, press F2, and enter The Hague.
Now, click the Post button in the DBNavigator to apply this update. If you now
click the Refresh button on the DBNavigator, the query will re-execute and you
will see that your update has been applied, even though the query should
technically be readonly. The updated form is shown in Figure 5-10.
Figure 5-10: What should have been a readonly query has been updated
using an FDUpdateSQL component
With respect to the three FireDAC components that will do the heavy lifting, I
have set only five properties total to get this project to work. (Note that I
specifically avoided mentioning the FDPhysIBDriverLink and the
FDGUIxWaitCursor, since their only purpose is to supply some resources that
FireDAC needs.)
For the FDCommand, I set its Connection property to point to the
FDConnection on the data module that I describe in Appendix A (and which is
used by nearly every example project in this book). In addition, I set its
CommandText property to an SQL string, as shown here:
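The string itself is not shown here; something along these lines, selecting the table that the project edits, would be typical (an assumption on my part):

FDCommand1.CommandText := 'SELECT * FROM Customer';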
When you run the project, your form looks like that shown in Figure 5-12. And,
if you edit a record in the DBGrid and then click the Post button on the
DBNavigator, or simply move off that record, the update is posted to the
underlying table. You can even delete records by pressing Ctrl-Delete or
clicking the Delete button on the DBNavigator (though I don't recommend doing
this, as it will compromise the sample database), and insert records by
pressing Ins or clicking the Insert button on the DBNavigator. And it all just
works.
Ok, I lied just a little. Because I am using the book's data module, which permits
you to change the databases for all of the book's projects by editing a single line
in the DataPaths.pas file, I did remove the auDesignTime and auRunTime flags
from the FDCommand's ActiveStoredUsage property, and I added a single line
of code to the form's OnCreate handler to open the FDCommand. But other than
that, this project is pristine.
So, what's the point of this demonstration? In short, FireDAC is very capable
even with minimal configuration. And, the best news is that you do not need to
use an FDCommand, an FDTableAdapter, and an FDMemTable in your
application. Just drop a single FDQuery into your project, define an SQL
statement to perform your data magic, and point it to an active FDConnection,
and you are ready to work.
Managing Transactions
A transaction is a wrapper for operations against your database. Transactions
permit you to signal that you are beginning to perform a task, and likewise
signal when that task is complete. More importantly, a transaction permits you to require that
the task is completed correctly, or not at all. As a result, the role of a transaction
is to ensure that your database remains in a consistent state.
Let's begin with a simple example. Your database is designed to record product
orders for your customers. A customer, using your application, orders five
items, which are added to their shopping cart. At the conclusion of the
interaction, the customer submits their order. A transaction can ensure that all
five items are posted to the OrderItems table of your database, or none of them
are (after which it is your application's responsibility to notify the customer that
something was wrong with the order). You don't want just two items ordered, or
four — you want all five or none.
Talking about transactions is complicated by the fact that not all databases
handle transactions in the same way. Some support more features than others,
and some can be pretty picky about how to start and commit transactions.
Fortunately, FireDAC handles a lot of this for you, based on the database driver
you are using. In addition, using the FDTxOptions object available with
FDConnection and FDTransaction classes, you can control some of the finer
points of transaction management without writing a lot of code.
Implicit and Explicit Transactions
FireDAC supports two types of transactions: Implicit and explicit.
An implicit transaction happens in the background without your involvement. In
short, when you execute a query without explicitly starting a transaction,
FireDAC starts one for you, and at the completion of the query, the transaction
is either committed or rolled back (depending on the success of your query).
If you don't want, or don't need, implicit transactions, you can turn them off by
setting the FDTxOptions.AutoCommit to False (the default is True).
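For example, assuming a connection component named FDConnection1, turning off implicit transactions is a single assignment:

// Disable implicit (auto-commit) transactions for this connection
FDConnection1.TxOptions.AutoCommit := False;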
While implicit transactions are nice, they only work at the single SQL statement
execution level. Specifically, when AutoCommit is True, each query you send to
your server is wrapped in a transaction, which means that individual queries are
ensured to complete in an all or nothing fashion, but this does little for you when
you have several operations that must be committed or rolled back as a group.
And this is where explicit transactions come in, and they are what most database
developers think of when they talk about transactions. Explicit transactions are
begun through an explicit call to StartTransaction, after which, two or more
queries are typically executed. After the last query is invoked, the transaction is
concluded with an explicit call to Commit. If everything works properly, the call
to Commit ensures that all of the operations performed by the queries executed
while the transaction was in force are embodied in the database. In other words,
Commit ensures that the database is in a stable state. The one or more query
statements that you execute are executed atomically, meaning that they all
succeed, or none succeed.
If at least one of the SQL queries that you execute against your database cannot
be completed successfully, it is up to you to make sure that the entire
transaction, including those queries that had already succeeded, be reversed, and
no further queries that are part of the operation are attempted. Canceling the
successful queries executed during the course of a transaction is accomplished
by calling the transaction's Rollback method.
Based on this description, you should already be thinking about the program
control structure that you'll need to use, and there is one.
As soon as you start a transaction, you should enter the try block of a try-except.
If even one query raises an exception, your except block will, at a minimum,
begin with a call to roll back the transaction. This will have the effect of
undoing any work performed since the initiation of the transaction.
Similarly, the very last statement in the try block will be a call to commit the
transaction. Assuming that call does not raise an exception and branch to the
except block (which it shouldn't), the effects of the two or more query
operations will be present, in their complete form, in the underlying database.
Explicit transactions are demonstrated in the FDExplicitTransactions project,
whose main form is shown in Figure 5-13.
Code: You can find the FDExplicitTransactions project in the code download.
The top button will execute one good query and one that violates a table
constraint (a requirement that the Country field of the Customer table contains a
value found in the Country field of the Country table). But there is no
transaction to ensure atomicity — that is, complete success or complete failure.
The second button executes the same two queries, but this time within the scope
of a transaction. The third button executes two valid queries, again within the
scope of a transaction.
All three of these buttons start their processing by first deleting any records
associated with the success or failure of the INSERT queries associated with
these buttons, which ensures that each button starts from the same starting point.
In addition, after each button has attempted to insert the two records, the current
data in the Customer table is displayed in the provided DBGrid, and the last
record is made the current record.
Here is an example of how these event handlers look. In this case, I'm displaying
the properly designed event handler, which wraps the two insert queries in a
transaction, but which includes an invalid Country field value in the second
query (the Country table includes the country name England, but not United
Kingdom):
try
RemoveTestRecordsIfExist;
FDQuery1.SQL.Text := SQLstring;
FDQuery1.Prepare;
try
SharedDMVcl.FDConnection.StartTransaction;
try
FDQuery1.Params[0].Value := 'Absolute Good';
FDQuery1.Params[1].Value := 'Adrian';
FDQuery1.Params[2].Value := 'Albright';
FDQuery1.Params[3].Value := 'Allanville';
FDQuery1.Params[4].Value := 'USA';
FDQuery1.ExecSQL;
// Second INSERT (the parameter values below are illustrative); the Country
// value 'United Kingdom' violates the Customer table's constraint
FDQuery1.Params[0].Value := 'Absolutely Better';
FDQuery1.Params[1].Value := 'Amanda';
FDQuery1.Params[2].Value := 'Atwell';
FDQuery1.Params[3].Value := 'Allanville';
FDQuery1.Params[4].Value := 'United Kingdom';
FDQuery1.ExecSQL;
SharedDMVcl.FDConnection.Commit;
except
SharedDMVcl.FDConnection.Rollback;
end;
finally
FDQuery1.Unprepare;
end;
finally
UpdateDBGrid;
end;
end;
Figure 5-14 shows how the Customer table appears when two queries, one that
succeeds and one that fails, are executed without a transaction. Not only does an
error message get displayed, noting the constraint violation, but the DBGrid
includes one, but not both, of the query results.
Figure 5-14: Since there was no transaction, one query succeeded and one
failed
By comparison, Figure 5-15 shows what happens when the same two queries are
executed in a transaction. While one query succeeded and one failed, neither of
the records was inserted.
Figure 5-15: Only one of the two queries succeeded, but no records were
inserted
Finally, Figure 5-16 shows what happens when both queries succeed. This same
result would occur without a transaction, but the transaction is there specifically
for those cases where failure might occur.
Figure 5-16: Both queries were successful, and the transaction committed
the results
Transaction Isolation
As soon as you begin a transaction, transaction isolation determines what your
queries can see from the database, and what other connections to the database
can see about the updates you are making, that is until you have committed or
rolled back your transaction. What options you have regarding transaction
isolation depends on your database.
Nested Transactions
In theory, a nested transaction is an atomic operation that is started, and then
committed or rolled back, from inside an outer transaction, without affecting the
integrity of the outer transaction. In addition, even those nested transactions that
are successfully committed will be rolled back if the outer transaction is
ultimately rolled back. In practice, there are only two databases that support
nested transactions. These are InterBase and Firebird (an open source InterBase
spin-off).
Asynchronous Queries
FireDAC also supports the asynchronous execution of queries. For example, if
you set the ResourceOptions.CmdExecMode property of an FDQuery to
amNonBlocking, FireDAC will execute the query on a worker thread.
Importantly, FireDAC manages the details of this worker thread, destroying the
thread once the query is finished. While this makes asynchronous execution
easy to employ, you still need to observe sound multithreaded practices. For
example, a query executed asynchronously should not be connected to a
DataSource or BindSourceDB during its execution. You can, for instance, call
FDQuery.DisableControls to disassociate a query from its DataSource prior to
executing the query asynchronously, calling FDQuery.EnableControls from the
FDQuery.AfterOpen event handler.
When a query is running asynchronously, you can monitor its progress by
reading the query's Command.State property. Among other things, you can use
this property to determine that the query is executing, is in the process of
fetching the result set, or has completed execution and has an accessible result
set.
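A minimal sketch of this pattern, with the component and handler names assumed, might look like the following (amNonBlocking is declared in FireDAC.Stan.Option):

procedure TForm1.RunAsyncButtonClick(Sender: TObject);
begin
  FDQuery1.DisableControls; // detach the UI before the worker thread starts
  FDQuery1.ResourceOptions.CmdExecMode := amNonBlocking;
  FDQuery1.Open; // returns immediately; FireDAC opens the query on a worker thread
end;

procedure TForm1.FDQuery1AfterOpen(DataSet: TDataSet);
begin
  FDQuery1.EnableControls; // reattach the UI once the result set is available
end;

While the query is running, Command.State can be polled (for example, from a TTimer) to determine whether it is still executing or fetching.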
This project is a simple FireMonkey application that permits the editing of the
Customer table in the dbdemos.gdb InterBase database. So long as the FireDAC
Monitor utility is running before this project is loaded, all SQL operations that
FireDAC submits to the InterBase server can be viewed. Figure 5-18 depicts the
FireDAC Monitor output after a change has been made to the record shown in
Figure 5-17, and the monitor trace shown in Figure 5-18 represents FireDAC's
updates when the Post button on the BindNavigator is clicked.
In the following chapter, I show you how to navigate and edit FireDAC datasets.
Chapter 6
Navigating and
Editing Data
This chapter describes basic operations that are universal to most TDataSet
descendants, with particular attention paid to FireDAC datasets. If you are a
seasoned Delphi database developer, you will likely be familiar with most, if not
all, of the topics that I introduce in this chapter. In that case, you may want to
scan through this chapter.
If you are new to Delphi database development, this chapter is designed to
familiarize you with the fundamentals of working with the data exposed by
datasets programmatically. It begins with a discussion of Fields, the classes that
permit you to read and write data from individual columns in your datasets.
This chapter continues with an introduction to the concept of the current record,
followed by a look at the methods and properties that you can use to navigate
and edit your data programmatically.
Understanding Fields
There are two basic uses for datasets. The first is to initiate an operation on the
connected database, such as creating a new table, altering an index, or inserting
data using an SQL INSERT statement. These types of operations do not return a
result set.
The second use of a dataset, typically the most common one, is to hold a
reference to a tabular structure that consists of zero or more rows and one or
more columns. This structure often comes from an SQL SELECT statement or
the execution of a stored procedure. In Delphi’s terminology, the rows are
referred to as records and the columns are called fields.
The term field also refers to instances of a class that descends from TField, and
you use them to read and write data returned in columns, as well as read the
associated metadata. For example, fields that contain text are often members of
the TStringField class. This class supports an AsString property of type string
that can be used to read the text from the corresponding column. Similarly, so
long as the corresponding column is not readonly, you can write to the
underlying field by assigning text to the AsString property.
More accurately, TStringField, like other TField descendants, inherits AsString
from TField. In addition to AsString, there are many other properties that permit
you to read, and sometimes write, data using other data types. These properties
include AsBoolean, AsInteger, and AsFloat, as well as a Value property, which
is of type variant.
Metadata inherited from TField permits you to determine the data type of the
underlying data from which the field gets its data (DataType), as well as other
information, such as the maximum length of a text field (Size), or whether the
field is a key field in an underlying table (pfInKey in ProviderFlags).
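For example, the following sketch (the field name is an assumption) reads a few of these metadata properties; GetEnumName is declared in System.TypInfo, and pfInKey in Data.DB:

var
  Fld: TField;
begin
  Fld := FDQuery1.FieldByName('LastName');
  ShowMessage(Format('DataType: %s   Size: %d   Key field: %s',
    [GetEnumName(TypeInfo(TFieldType), Ord(Fld.DataType)), Fld.Size,
     BoolToStr(pfInKey in Fld.ProviderFlags, True)]));
end;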
You access the fields of a dataset through its Fields property, or from its
FieldByName, FieldByNumber, or FindField methods. The Fields property of
TDataSet is a reference to a TFields instance, and the default property of
TFields is an indexed property called Fields. As a result, the following code
refers to the data in the first column returned by a FireDAC query:
var
Data: Variant;
begin
Data := FDQuery1.Fields.Fields[0].Value;
Since TFields.Fields is the default property, you can omit the explicit reference
to it and simply index the dataset's Fields property, as shown here:
var
Data: Variant;
begin
Data := FDQuery1.Fields[0].Value;
Since Delphi’s supported databases can return null values in one or more
columns, there are times when a field does not have a value. Nonetheless, if you
attempt to read a null field it might actually return a value, such as an empty string
for a string field, or 0 for a null integer field. However, a null value is not an
empty string or zero. As a result, if a null value is a possibility, you may want to
first test for a null value using the TField.IsNull property. Such a test might look
something like the following:
var
Data: Variant;
begin
if not FDQuery1.Fields[0].IsNull then
Data := FDQuery1.Fields[0].Value;
There are additional event handlers that are available in most situations where a
dataset is being navigated and edited, and which are always available when
data-aware VCL controls are used. These are the event handlers associated with
a DataSource. Since all data-aware VCL controls must be connected to at least
one DataSource, the event handlers of a DataSource provide you with another
source of customization when a user navigates and edits records. These event
handlers are: OnDataChange, OnStateChange, and OnUpdateData.
Code: The code project FDNavigation is available from the code download. See
Appendix A for details.
Of somewhat greater interest are the event handlers associated with the
DataSource in this project. For example, the OnDataChange event handler is
used to display which record in the FDQuery is the current record in the first
panel of the StatusBar, which appears at the bottom of the main form, as shown
in Figure 6-1.
Figure 6-1: The StatusBar on the main form of the FDNavigation project is
updated by OnDataChange and OnStateChange event handlers
This OnDataChange event handler not only displays the current record number,
and how many records are in the FDQuery, but also indicates in the fourth panel
of the StatusBar whether an attempt was made to navigate above the first record
(Bof) or below the last record (Eof). Bof and Eof are described in a bit more
detail a little later in this chapter.
The following is the OnDataChange event handler for the DataSource in this
project:
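The handler is not reproduced in full here; a sketch along these lines, with the component names and panel indices assumed, conveys the idea:

procedure TForm1.DataSource1DataChange(Sender: TObject; Field: TField);
begin
  StatusBar1.Panels[1].Text := Format('Record %d of %d',
    [FDQuery1.RecNo, FDQuery1.RecordCount]);
  if FDQuery1.Bof then
    StatusBar1.Panels[3].Text := 'Bof'
  else if FDQuery1.Eof then
    StatusBar1.Panels[3].Text := 'Eof'
  else
    StatusBar1.Panels[3].Text := '';
end;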
Note: The value returned by the RecordCount property depends on how you
have configured your dataset. Specifically, the Mode, RecordCountMode,
MaxRecs, and RowSetSize properties of FetchOptions will all affect the number of
records returned in the dataset, as well as how RecordCount operates.
The OnStateChange event handler also updates the StatusBar shown in Figure
6-1. In this event handler, RTTI (runtime type information) is used to display
the human-readable version of the FDQuery's State property, which is of type
TDataSetState. Here is the definition of TDataSetState:
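In recent Delphi versions, the Data.DB unit declares it as follows:

type
  TDataSetState = (dsInactive, dsBrowse, dsEdit, dsInsert, dsSetKey,
    dsCalcFields, dsFilter, dsNewValue, dsOldValue, dsCurValue,
    dsBlockRead, dsInternalCalc, dsOpening);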
In Figure 6-1, the FDQuery is in the Browse state, as seen in the third panel of
the StatusBar. The following is the OnStateChange event handler used by the
DataSource:
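The handler is not reproduced here; a sketch of the technique, with the component names and panel index assumed, might look like this (TRttiEnumerationType is declared in System.Rtti):

procedure TForm1.DataSource1StateChange(Sender: TObject);
begin
  // Convert the TDataSetState value to its human-readable name using RTTI
  StatusBar1.Panels[2].Text :=
    TRttiEnumerationType.GetName<TDataSetState>(FDQuery1.State);
end;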
Calculating Performance
There is one more piece of information that is written to the status bar, and that
is the length of time that a particular operation takes. In this project, and several
others discussed in later chapters, a TStopWatch is used to calculate how long a
particular operation takes. TStopWatch is a record type declared in the
System.Diagnostics unit, and it makes calculating latency of operations easy.
I have introduced two helper subroutines to assist in calculating how long
operations take, Start and Complete. Here are these methods, which are self-
explanatory:
procedure Start;
begin
StopWatch := TStopWatch.StartNew;
end;
procedure Complete;
begin
StopWatch.Stop;
Form1.StatusBar1.Panels[0].Text := 'Elapse time: ' +
StopWatch.ElapsedMilliseconds.ToString + ' milliseconds';
end;
Figure 6-2 shows the output of these routines in the first segment of the status
bar after a forward scan operation has been executed. In this case, disable
controls was not used and the scan took approximately 30 milliseconds.
Navigating Programmatically
In Chapter 4, Basic Data Access, I discussed data binding, both with VCL data-
aware controls as well as with Live Bindings. That discussion demonstrated how
a user could navigate and edit data through the user interface.
This section looks at navigation whether or not data-aware controls are
involved, for example, to move to a record that you want to change
programmatically. For a dataset, the core navigation methods and properties
include First, Next, Prior, Last, MoveBy, RecNo, and GotoBookmark. Each of these is
described in the following sections.
Before I continue, I want to differentiate between navigating and searching.
Navigating refers to moving to another record relative to the current record,
making the new record the current record. Searching, by comparison, involves
an attempt to locate a record with a particular value or set of values in its fields.
I am covering navigation in this chapter, and cover searching in Chapter 8,
Searching Data. Bookmarks, a topic that I also cover in this chapter, are more similar
to searching than to the other navigational techniques covered here,
but I included them in this chapter because I felt they had more in common with
navigation than with the searching techniques that I cover in Chapter 8.
Basic Navigation
The TDataSet interface provides for basic navigation through four methods:
First, Next, Prior, and Last. These methods are pretty much self-explanatory.
Each one produces an effect similar to the corresponding button on a
DBNavigator.
There is another type of navigation that is similar to First, Next, and so forth.
That navigation, however, is associated with a filter, and is referred to as filtered
navigation. Filtered navigation is discussed in detail in Chapter 9, Filtering
Data.
Have I Gone Too Far? Bof and Eof
You can determine whether an attempt to navigate has tried to move outside of
the range of records in the dataset by reading the Bof (beginning-of-file) and
Eof (end-of-file) properties. Eof will return True if a navigation method
attempted to move beyond the end of the table. When Eof returns True, the
current record is the last record in the dataset.
Similarly, Bof will return True if a navigation attempt tried to move before the
beginning of the dataset. In that situation, the current record is the first record in
the dataset.
Bof and Eof were used in the OnDataChange event handler shown earlier in this
chapter, to indicate in the StatusBar whether the last navigation attempt tried to
move before, or beyond the records of the FDQuery, respectively. Figure 6-3
shows how the main form of the FDNavigation project looks when you have
used MoveBy to attempt to move beyond the end of the FDQuery.
Figure 6-3: An attempt to use MoveBy to move beyond the end of the
FDQuery has set Eof to True
Using MoveBy
MoveBy permits you to move forward and backward in a dataset, relative to the
current record. For example, the following statement will move the current
cursor five records forward in the FDQuery (if possible):
FDQuery1.MoveBy(5);
Passing a negative value moves the cursor backward. For example, the following
statement attempts to move 100 records toward the beginning of the dataset:
FDQuery1.MoveBy(-100);
You can also move directly to a particular record by assigning its ordinal
position to the RecNo property:
FDQuery1.RecNo := 5;
The description of the preceding example was qualified by the statement that the
operation will succeed if possible. This qualification has two aspects to it. First,
the cursor movement will not take place if the current record has been edited,
but cannot be posted. For example, if a record has been edited but not yet
posted, and the data cannot pass at least one of the dataset's constraints,
attempting to navigate off that record will raise an exception and the navigation
will not occur.
The second situation where the record navigation might not be possible is
related to the number of records in the dataset. Attempting to set RecNo to a
record beyond the end of the table, or prior to the beginning of the table, raises
an exception.
The use of RecNo is demonstrated in the following event handler, which is
associated with the button labeled RecNo =:
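The handler is not shown here; a sketch, with the edit control's name assumed, might look like this:

procedure TForm1.RecNoBtnClick(Sender: TObject);
begin
  // Jump directly to the record whose ordinal position the user entered;
  // an out-of-range value raises an exception, as described above
  FDQuery1.RecNo := StrToInt(RecNoEdit.Text);
end;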
It is important to recognize that between the time you call DisableControls and
EnableControls, the dataset is in an abnormal state (the GUI is detached, at least
with respect to the dataset). In fact, if you call DisableControls and never call a
corresponding EnableControls, the dataset will appear to the user to have
stopped functioning based on the lack of activity in the data-aware controls. As
a result, it is essential that if you call DisableControls, you structure your code
in such a way that a call to EnableControls is guaranteed. One way to do this is
to enter a try block immediately after a call to DisableControls, invoking the
corresponding EnableControls in the finally block.
This is demonstrated in the OnClick event handler associated with the button
labeled Scan Forward on the main form of the FDNavigation project. In this
event handler, if the RadioGroup named ControlsStateBtnGrp is set to 1 (the
Disabled radio button is selected), the DisableControls method of the dataset is
called before the scanning takes place. Furthermore, once the scanning is
complete, the finally clause ensures that controls are once again enabled (if they
were initially disabled):
Start;
if ControlsStateBtnGrp.ItemIndex = 1 then
FDQuery1.DisableControls;
try
FDQuery1.First;
while not FDQuery1.Eof do
begin
//do something with a record
FDQuery1.Next;
end;
finally
if ControlsStateBtnGrp.ItemIndex = 1 then
FDQuery1.EnableControls;
end;
Complete;
end;
The FDNavigation project also demonstrates the use of bookmarks, storing the
bookmark in a field declared in the main form's private section:
private
{ Private declarations }
FBookmark: TBookmark;
Initially, the GotoBookmark button is not enabled. It is enabled only after you
have created a bookmark by clicking the button labeled Get Bookmark. This is
shown in the OnClick event handler of this button:
procedure TForm1.GetBookmarkBtnClick(Sender: TObject);
begin
// Handler and button names are assumptions; the essential steps are saving
// the current position and enabling the GotoBookmark button
FBookmark := FDQuery1.GetBookmark;
GotoBookmarkBtn.Enabled := True;
end;
Once you have created a bookmark, if you navigate off of the record, you can
make it the current record again by calling GotoBookmark as shown in the
following code:
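A sketch of that call, using the FBookmark field declared earlier:

// Return to the record that was current when the bookmark was taken
FDQuery1.GotoBookmark(FBookmark);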
Editing a DataSet
While not technically a navigation issue, one of the primary reasons for
navigating a dataset programmatically is to locate a record that you want to
change. Since this topic is covered only in passing in other chapters of this book,
I am going to take this opportunity to discuss the programmatic editing of
records in a dataset in a bit more detail here.
You edit a current record in a dataset by calling its Edit method, after which you
can change the values of one or more of its Fields. As mentioned earlier in this
chapter, those changed values are stored in the current record’s record buffer
until that record is posted. After changing one or more fields in the current
record, you can post the record by explicitly calling the Post method.
Alternatively, you can simply navigate off the record to attempt to post the new
values.
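For example, the following sketch (the field name and value are illustrative) edits the current record and posts the change:

FDQuery1.Edit; // put the dataset into the dsEdit state
FDQuery1.FieldByName('City').AsString := 'The Hague';
FDQuery1.Post; // write the change to the record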
If you modify a record, and then decide not to post the change, or discover that
you cannot post the change, you can cancel all changes to the record by calling
the dataset's Cancel method. For example, if you change a record, and then find
that calling Post raises an exception, you can call Cancel to cancel the changes
and return the dataset to the dsBrowse state.
To insert and post a record, you have several options. You can call Insert or
Append, after which your cursor will be on a newly inserted record, assuming
that you started from the dsBrowse state. If you were editing a record prior to
calling Insert or Append, a new record will not be inserted if the record being
edited cannot be posted. Once a record is successfully inserted, assign data to
the Fields of that record and call Post to post those changes (or successfully
navigate off of the record).
Note: So long as there is no active index to control the record order, Insert will
insert the new record at the position of the current record, while Append will
add the new record to the end of the dataset.
You include in the constant array the data values that you want to assign to each
field in the dataset. If you want to leave a particular field unassigned, include the
value null in the constant array. Fields you want to leave unassigned at the end
of the record can be omitted from the constant array.
For example, if you are inserting and posting a new record into a four-field
FDQuery, and you want to assign the first field the value 1000 (a field
associated with a unique index), leave the second and fourth fields unassigned,
but assign the value 'new' to the third field, your InsertRecord invocation may
look something like this:
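Based on that description, the call would look something like the following (Null is declared in System.Variants):

// field 1 = 1000, field 2 = unassigned, field 3 = 'new'; field 4 is omitted
FDQuery1.InsertRecord([1000, Null, 'new']);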
mode (which corresponds to the dsEdit state) and cannot be posted, the change
is canceled. If the record cannot even be placed into edit state, which for a
dataset should only happen if the dataset’s CanModify property is false, or if the
FireDAC dataset’s UpdateOptions property
(TFDBottomUpdateOptions.EnableUpdate and
TFDBottomUpdateOptions.ReadOnly) do not allow editing, the attempt to post
changes is skipped:
In the next chapter, I show you how to create and use indexes.
Chapter 7
Creating Indexes
In many respects, an index on a FireDAC dataset is like that on any other
DataSet descendant. Specifically, an index controls the order of records in the
dataset, as well as enables or enhances a variety of other operations based on
record-specific contents, such as searches, ranges, and dataset linking.
A dataset's structure is normally obtained from an SQL SELECT statement or
from the execution of a stored procedure that returns a result set; its indexes are
not. Specifically, when a FireDAC dataset is loaded with data obtained from a
result set, or is loaded from a previously saved file, the dataset's structure is
largely (and usually entirely) defined by the result set, or by the metadata loaded
from the saved file. Indexes, by comparison, must be defined explicitly for the
dataset's data.
Even if the underlying table in the database has indexes, an SQL SELECT from
that table will not result in an index in the dataset. In addition, the dataset can
have indexes even when the table in the database does not. In other words, a
database’s indexes and a FireDAC dataset’s indexes are completely
independent.
Consider the Customer table found in the example employee.gdb InterBase
database that ships with Delphi. There are four customer table-related indexes
present in the database: RDB$PRIMARY1, CUSTNAMEX, CUSTREGION,
and RDB$FOREIGN23. Accessing that data using an FDTable or an FDQuery
with a SELECT * FROM Customer query will load that table’s data into the
dataset, and its structure will reflect that of the Customer table. But after the
execution of the query, the FireDAC dataset will not have any indexes. Indexes,
if you want them, must be defined explicitly for the FireDAC dataset either at
design time or at runtime.
This is not to say that FireDAC ignores constraints defined for tables in a
database. Indeed, by default, it observes and enforces primary keys, unique
keys, foreign keys, and other constraints. These operations are controlled though
the UpdateOptions properties, including CheckReadonly, CheckRequired, and
CheckUpdatable, which you can customize to override those default behaviors.
Index Overview
An index is a data structure that contains pointers to the data in your dataset, in
most cases based on the values in one or more of its columns. When a dataset has more
than one index, each index is based on either a different column, a collection of
the dataset’s columns, or expressions based on data in the dataset’s columns.
The data in each index is ordered, permitting it to be searched very quickly.
Indexes serve five distinct purposes. These are to:
Guarantee uniqueness of table data
Provide sorted views of data
Provide quick searches for records
Allow for master-detail joins
Support customized views of the data based on expressions or filters
First of all, an index can be used to prevent duplicate data from appearing in a
table. This type of index is referred to as a unique index. A unique index can
prevent duplicate values from being inserted into your dataset, even if the field
whose values you want to keep unique is not part of the primary key.
The second purpose of an index is to provide sorted views of the data in a
dataset. The index does not actually change the order of the records in a dataset's
data store, but instead causes those records to behave as if they were ordered
based on the index.
The third purpose of an index is to quickly locate one or more records based on
data in the indexed fields. These indexes may or may not be unique indexes,
depending on the fields used to define the index. Since indexes are already
sorted, attempting to locate records based on the one or more fields of an index
means that the data can be searched very quickly in order to locate the
associated records. Searching datasets is discussed in detail in Chapter 8,
Searching Data.
The fourth purpose of an index is closely related to the second and third
purposes. An index can be used to support master-detail joins between data in
two related datasets.
Finally, an index can restrict the dataset's result set to a particular view.
Specifically, an index can suppress some of the records in a FireDAC dataset,
based on a filter expression, as you will see later in this chapter.
Temporary Indexes
Temporary indexes are created with the IndexFieldNames property. To create a
temporary index, set the IndexFieldNames property to the name of the field or
fields on which you want to base the index. When you need a multi-field index,
separate the field names with semicolons.
Note: The name temporary index is derived from the ClientDataSet class, which
introduced the IndexFieldNames property. In the ClientDataSet, the index
created when you assign a string to IndexFieldNames is transient, in that it is
destroyed when you assign a different string to the IndexFieldNames property.
FireDAC datasets, by comparison, do not destroy an existing index created with
IndexFieldNames when you change the value of this property. Nonetheless, I
have opted to continue to refer to indexes created using IndexFieldNames as
temporary indexes.
For example, to create an index based on a LastName field followed by a
FirstName field, you would assign the following string:
LastName;FirstName
To sort one of the fields in descending order, follow its name with the :D modifier:
LastName:D;FirstName
The following statement creates such a temporary index at runtime:
FDQuery1.IndexFieldNames := 'LastName:D;FirstName';
When you assign a value to the dataset's IndexFieldNames property, the dataset
immediately generates the index if it does not already exist. If the contents of
the data are being displayed, once the index has been created, those records will
appear sorted based on the fields of the index, with the first field in the index
sorted first, followed by the second field (if present), and so on.
Once the index is active, it is maintained. For example, each time you insert,
update, or delete data from the associated dataset, the index is updated to reflect
these changes.
Code: The project FDFilter is in the code download. See Appendix A for details.
Figure 7-1: The main form of the FDFilter project. The button labeled
"Select Index" permits a user to choose their own index at runtime
In order to enter an index, the user clicks the Select Index button. This button is
associated with the OnClick event handler that appears at the top of the
following code segment. This event handler calls the
GetTemporaryIndexFromUser custom method, passing in the currently selected
index. If GetTemporaryIndexFromUser does not raise an exception, it returns a
validated string that can be used to request a temporary index. At this point, the
value is assigned to the Edit, which in turn triggers an OnChange event for the
Edit. The Edit's OnChange event assigns the value in the Edit to the
FDMemTable's IndexFieldNames property.
The real work is performed by GetTemporaryIndexFromUser, and this code is
shown below. This method begins by displaying an InputQuery dialog box,
requesting the index text from the user. (A custom dialog box could make this a
friendlier interface, but InputQuery keeps the example simple.)
function TForm1.GetTemporaryIndexFromUser(
CurrentIndex: String): String;
//local function to verify field list
//(the declaration and setup lines below are an assumption; the original
//listing does not show them)
function FieldsValid(FDDataSet: TFDDataSet; Fields: string): Boolean;
var
SList: TStringList;
s1, s2: string;
begin
SList := TStringList.Create;
SList.StrictDelimiter := True;
SList.Delimiter := ';';
SList.DelimitedText := Fields;
try
if SList.Count = 0 then
begin
Exit( True );
end
else
for s1 in SList do
begin
if UpperCase(s1).EndsWith(':D') then
s2 := copy( s1, 1, Length(s1) - 2)
else
s2 := s1;
if FDDataSet.FindField(s2) = nil then
exit( False );
end;
Result := True;
finally
SList.Free;
end;
end; //function FieldsValid
begin
Result := IndexFieldNamesEdit.Text;
if InputQuery('Enter index field(s) [ex: field1;field2:D]',
'Index on', Result) then
begin
while Pos(' ',Result) <> 0 do
Result := StringReplace(Result, ' ',
'', [rfReplaceAll]);
if not FieldsValid(FDMemTable1, Result) then
raise EBadFieldInIndex.Create('IndexFieldNames ' +
'contains at least one invalid field name');
end; //if InputQuery
end;
In addition, adding the soUnique flag will ensure that the combination of values
in the fields of the index are unique, which will cause an exception to be raised
when the index is applied if the data already includes duplicate values.
Unfortunately, the soNoCase flag default value is False, which should produce a
case-sensitive index, but temporary indexes are case insensitive by default. I
have not been able to create a case-sensitive index using temporary indexes.
Temporary indexes are extremely useful in a number of situations, such as in the
preceding example where you want to permit your users to sort the data based
on any field or field combination. There are, however, some drawbacks to
temporary indexes.
Specifically, temporary indexes do not support more advanced index options,
such as distinct indexes, expression indexes, and filter-based indexes. If you
need some of these more sophisticated features, you will need to create
persistent indexes.
Persistent Indexes
Persistent indexes, when created, are similar to temporary indexes in many
ways. They do, however, support a variety of options that are not available for
temporary indexes.
All FireDAC datasets support persistent indexes based on a collection of
FDIndex objects, each of which can represent the properties that define the
associated index. The FDIndex collection can be accessed through the Indexes
property of the FireDAC datasets to which the individual indexes belong.
FDMemTables support a second type of persistent index based on IndexDefs (a
collection of index definitions). IndexDefs support most, but not all of the
features supported by the FDIndex class, and are exposed by FDMemTables in
order to provide source code compatibility with the ClientDataSet class, which
only supports persistent indexes through IndexDefs.
In this chapter, I am going to focus on creating persistent indexes using the
Indexes property of FireDAC datasets. I have made this decision for several
reasons. First, all FireDAC datasets support a published Indexes property, while
only the FDMemTable exposes the IndexDefs property as a published property.
Second, FDIndex instances support all of the features supported by IndexDefs,
but IndexDefs do not support all of the features of FDIndex instances. Finally,
even when you are using an FDMemTable, you can use IndexDefs or Indexes,
but not both. As a result, the only reason to use IndexDefs with an
FDMemTable is to provide source code compatibility with ClientDataSets. For
this reason, I will not be covering IndexDefs in this book, though you can apply
much of what you learn about FDIndexes here to the use of IndexDefs, if
necessary.
Defining Persistent Indexes
Persistent indexes are defined at design time using the Indexes collection
property editor of the FireDAC dataset. To display this collection editor, select
the Indexes property of a FireDAC dataset in the Object Inspector and click the
ellipsis button that appears.
Click the Add New button on the Indexes collection editor toolbar (or press the
Ins key) once for each persistent index that you want to define for a FireDAC
dataset. Each time you click the Add New button (or press Ins), a new FDIndex
is created. Complete the index definition by selecting each FDIndex in the
Indexes collection editor, one at a time, and configure it using the Object
Inspector. The Object Inspector with an FDIndex selected is shown in Figure 7-
2. Note that the Options property has been expanded to show its various flags.
Code: This sample code is found in the FDIndexes project of the code
download.
At a minimum, you must define what to index. There are two ways to do this.
The first is to set the Fields property of an FDIndex to the name of the field or
fields to be indexed. Similar to how you use the IndexFieldNames property, if
you are building a multi-field index, you separate the field names with
semicolons. You cannot include virtual fields, such as Calculated or Aggregate
fields, in an index, though you can use InternalCalc virtual fields in an index.
Also similar to IndexFieldNames, the Fields property of a FireDAC dataset can
include the :D modifier following a particular field name to sort that field in
descending order. You can also include :A and :N modifiers after one or more
fields, but as I described earlier, these do not appear to have an effect.
The second way to define the index is to set the Expression property. The
expression property can contain any value expression, which may include
references to fields, constants, FireDAC scalar functions, and expression engine
functions. When using an expression index, a value is calculated for each record
in the result set using that expression, and it is this value that serves as the basis
for the index.
You can use the Fields property or the Expression property, but not both. If you
have one of the properties set to a value, say Fields, and then assign a value to
the other (Expression in this case), the other property is set to an empty string.
By default, FDIndexes are ascending indexes. If you want the index to be a
descending index, set the soDescending flag in the Options property.
Alternatively, you can set the DescFields property to a semicolon-separated list
of the fields that you want sorted in descending order. DescFields provides an
alternative to using the :D modifier in the Fields property.
As you might imagine, you use the DescFields property only when the Fields
property is used to define the index, and the DescFields property is limited to
including only fields that appear in the Fields property. So long as the
soDescending flag is absent from the Options property of the FDIndex, fields
that appear in the DescFields property will be sorted in descending order, and the
the remaining fields of the Fields property that do not appear in DescFields will
be sorted in ascending order.
The following sections demonstrate the creation of four types of indexes, both at
design time and at runtime. These indexes include basic field-based indexes,
expression-based indexes, distinct indexes, and finally, filter-based indexes.
The examples given here make use of the FDIndexes project, found in the code
download. In this project, you will find a number of indexes that were defined at
design time, along with code that defines additional indexes at runtime.
In the following sections, I walk you through the creation of indexes at design
time. If you want to follow these steps, I suggest that you make a copy of the
FDIndexes project, and place it in a directory at the same level as FDIndexes.
For example, if you have extracted the sample code files and stored them in a
directory named FireDAC on your C drive, the FDIndexes project will be
located in the following directory path:
C:\FireDAC\FDIndexes
Create a new directory under FireDAC named FDIndexesCopy and copy the
files from C:\FireDAC\FDIndexes, and paste those copied files in
C:\FireDAC\FDIndexesCopy.
Creating a copy of the FDIndexes project, and placing it in a directory parallel
to the original files is essential, in that it will permit the code to find the sample
FDMemTable table that it needs for data. This file is in a directory called
BigMemTable, and the corresponding file contains close to 25,000 records.
Once you have copied the original files to the new directory, open the copied
project and select the FDMemTable. Select the Indexes property in the Object
Inspector and double-click the ellipsis button to display the existing indexes in
the Indexes collection editor, as shown here:
To delete these indexes, right-click inside the collection editor and choose Select
All from the displayed context menu. Next, press Del to delete these existing
indexes. You might also have to delete the generated IndexDefs as well. You are
now ready to proceed with the hands-on examples described in the following
sections.
Before you start, however, you will want to use the FireDAC FDMemTable to
load the BigMemTable table from a file. To do this, right-click the
FDMemTable and select Load From File from its displayed context menu.
Navigate to the BigMemTable directory, which should be parallel to your
current directory. Select the BigMemTable.Xml file, as shown in Figure 7-3,
and click Open.
Figure 7-3: Select the BigMemTable.XML file to load it into the FireDAC
FDMemTable
At this point the FDMemTable should load the data from BigMemTable.xml,
and your form should look like that shown in Figure 7-4.
6. The final step is to set the Active property of this index to True. This will
inform FireDAC that it can select this index. If the index's Active
property is not True, FireDAC will throw an exception when you set the
FDMemTable's IndexName property to the name of this index.
SELECTING AN INDEX AT DESIGN TIME
Once you have defined an index, you can assign the name of that index to the
IndexName property of the FireDAC dataset. If the dataset is not active, that
index will not be created, and applied, until you subsequently make the dataset
active.
On the other hand, if you have made the FireDAC dataset active at design time,
assigning the name of the Index to the IndexName property of the dataset will
cause that index to be constructed and then applied. This is demonstrated in the
following steps:
1. Select the FDMemTable.
2. Use the Object Inspector to set the IndexName property of the
FDMemTable to FirstGuidDTIdx. After a brief moment, FireDAC will
display the records in a sort order based on the index, where the
FirstName field is first sorted in ascending order, followed by GUID in
descending order. Your form might look something like that shown in
Figure 7-5.
var
Index: TFDIndex;
begin
// A field-based index
Index := FDMemTable1.Indexes.Add;
Index.Name := 'LastGuidRTIdx';
Index.Fields := 'Last_Name;GUID:D';
Index.Active := True;
In this case, I did not use the DescFields property, but instead decorated the
GUID field name in the Fields property with the :D modifier. Both techniques
produce the same results, which is to sort the GUID field in descending order.
SELECTING AN INDEX AT RUNTIME
You select an active FDIndex at runtime using the same technique as you do at
design time. Specifically, you assign the name of the index to the IndexName
property of the FireDAC dataset. To assist you, the FDIndex class includes a
small collection of methods that can help you find your index. These include the
FindIndex, IndexByName, and FindIndexForFields methods, as well as the
Items property.
The FDIndexes project includes a little code that iterates through all currently
defined indexes at runtime, populating a combobox with the names of the
available, active indexes. This code, which is called from the OnCreate event
handler of the form, is shown here:
procedure TForm1.ListIndexes;
var
i: Integer;
begin
for i := 0 to FDMemTable1.Indexes.Count - 1 do
if FDMemTable1.Indexes[i].Active then
cbxIndexes.Items.Add( FDMemTable1.Indexes.Items[i].Name );
cbxIndexes.ItemIndex := -1;
end;
Figure 7-6 shows how the form looks at runtime after one of the active
indexes has been selected.
Figure 7-6: An active index has been selected from the combobox, resulting
in a sorting of the records shown on the form
In the remaining sections of this chapter, I demonstrate three more index types:
Expression indexes, distinct indexes, and filter indexes. In each of these
sections, I walk you through the process of creating an index at design time, and
then show the code that creates the corresponding index at runtime.
Creating Expression Indexes
Unlike field-based indexes, which are largely based on the data in the columns
of the FireDAC dataset, an expression index is based on an expression. That
expression can include field names, constants, expression engine functions,
FireDAC scalar functions, and complex expressions involving one or more of
these combined with operators. While a simple expression might include only a
field name, say for example, FirstName, the real power of an expression index is
that the expression is calculated separately for each record, after which, the
dataset is ordered by the expression.
FireDAC provides support for a large number of scalar functions that you can
use in expressions, in filters, and in the SQL command preprocessor. These
include string functions, such as SUBSTRING(), UPPERCASE(), and
LENGTH(), and math and arithmetic functions, such as SIN(), LOG(), and
RAND() (random number). Date and time functions include
CURRENT_DATE() and DAYOFWEEK(), and system functions include
DATABASE() (the name of the database to which you are connected),
NEWGUID(), and IIF (immediate if).
Note: In order to use scalar functions from the FireDAC expression engine at
runtime, you must add the FireDAC.Stan.ExprFuncs unit to at least one of your
project’s uses clauses. Also, if you find that some of these functions, such as
LENGTH, raise an exception when you use them against the InterBase
database, see Appendix A for information on correcting those errors.
For a complete list of the FireDAC scalar functions, please refer to Tables 14-3
through 14-6 in Chapter 14, The SQL Command Preprocessor.
Use the following steps to create an index that will sort the FDMemTable based
on the length of the LastName field:
1. Open the FDMemTable’s Indexes collection editor and add a new index.
2. With this new index selected in the Indexes collection editor, set
Expression to the following string: LENGTH(LastName).
3. Set the Name property to LengthLastNameDTIdx.
4. Set the Active property to True.
If you now select the FDMemTable and set its IndexName property to
LengthLastNameDTIdx, you will see the data sorted by the length of the
LastName field, as shown in Figure 7-7.
Figure 7-7: The data is sorted by the length of the LastName field
To make things interesting, the following code shows how to create a runtime
index that sorts the records randomly. This code will define an expression index
where the rand() expression engine function is seeded with a random number
between 0 and MaxInt. Here is the code that creates this index. Figure 7-8 shows
the data sorted by this index.
// An expression index
Index := FDMemTable1.Indexes.Add;
Index.Name := 'RandomRTIdx';
Index.Expression := 'RAND(' + IntToStr(RandomRange(0, MaxInt)) + ')';
Index.Active := True;
such as predicates in an SQL SELECT WHERE clause that returns only those
records that match some criteria.
Use the following steps to create a distinct index using the FDIndexes project:
1. Create a new index for the FDMemTable.
2. With your newly created index selected in the Indexes collection editor,
use the Object Inspector to set Fields to City.
3. Next, set the Name property of the FDIndex to CityDistinctDTIdx.
4. Now set the Distinct property of the FDIndex to True.
5. Finally, set the Active property to True.
If the FDMemTable were active, and you were to set the IndexName property to
CityDistinctDTIdx, the FDMemTable would display only one record for each
different City field value in the dataset, as shown in Figure 7-9.
Figure 7-9: A distinct index is displaying 10752 unique city names in the
BigMemTable database
The FDIndexes project includes code that creates a distinct index at runtime.
This code creates an index that displays unique city/state combinations, as you
can see in the following code segment:
// A distinct index
Index := FDMemTable1.Indexes.Add;
Index.Name := 'CityStateDistinctRTIdx';
Index.Fields := 'City;State';
Index.Distinct := True;
Index.Active := True;
Figure 7-10 shows this index selected in the running project. Since there is the
possibility that two or more states share cities with the same names, this view
contains more records than the previous figure, almost 15,000, compared to just
under 11,000 in Figure 7-9.
Figure 7-10: A distinct index is displaying only one record for each unique
city/state field combination
In short, a filter limits which records are displayed to some subset of records
based on the data appearing in the fields of the dataset. For example, a filter may
be used to show only records where account holders have credit limits that
exceed some value, or whose invoices are past due greater than some number of
days. As a result, filters permit you to select some larger set of data from your
underlying database, and then manipulate which records are displayed or
navigable without having to re-execute the query that originally obtained the data.
When an index employs a filter expression, selecting that index does two things.
It orders the records based on the index, and it limits which records are
accessible based on the filter.
Filter-based indexes are unique to FireDAC. In most other Delphi data-access
frameworks, when you want to place a filter, you either do so without an index,
which can be relatively slow, or you first apply an index and then use one of the
filtering mechanisms that are index based. This second approach is almost
universally faster than filters that do not require an index.
Because FireDAC supports filter-based indexes, it can apply a filter directly through an index. As a result, these filters require fewer lines of code and are applied very quickly.
Use the following steps to create a filter-based index:
1. Create a new index for the FDMemTable.
2. With your newly created index selected in the Indexes collection editor,
use the Object Inspector to set Fields to FirstName.
3. Select Filter and enter the following expression: [FirstName] = 'K*'.
4. Expand the FilterOptions property and check the ekPartial flag.
5. Set the DescFields property to FirstName.
6. Finally, set the Name property to KNamesFilterDTIdx.
This filter, when applied, will display only records where the FirstName field
begins with the letter K, and the names will be in descending order. Figure 7-11
shows how the FDIndexes project looks when this filter is selected at design
time.
begin
s := '';
if InputQuery('Enter pattern for FirstName', 'Pattern', s) then
begin
Index := FDMemTable1.Indexes.Add;
Index.Name := 'FirstName=' + s + 'RTIdx';
Index.Fields := 'FirstName';
Index.Filter := '[FirstName] = ''' + s + '''';
Index.FilterOptions := [ekPartial];
Index.Active := True;
cbxIndexes.Items.Add( Index.Name );
cbxIndexes.ItemIndex := cbxIndexes.Items.Count - 1;
end
else
exit;
end;
Start;
FDMemTable1.IndexName := cbxIndexes.Text;
Complete;
end;
end;
Figure 7-12: A runtime filter-based index has sorted and filtered the
dataset
begin
Result := False;
Field := DataSet.Fields.FindField(FieldName);
if not Assigned(Field) then
exit;
if SameText(DataSet.IndexName,
GetIndexName(FieldName, idAscending)) then
DataSet.IndexName := NewIndex.Name;
Result := True;
end;
As you can see, this code starts by verifying that the field name that was passed
is a field in the FireDAC dataset defined in the first parameter, and also that it is
a field type that is sortable. Next, the code checks to see if the dataset is already
sorted using the ascending index name pattern, which is FieldName_AIdx. If a
match is found, the descending index name pattern (FieldName_DIdx) is either
selected or created and selected. Otherwise, the ascending pattern index name is
selected or created and selected.
The second sorting mechanism does not rely on FDIndexes, but instead makes
use of the IndexFieldNames property of FireDAC datasets. Like the
SortFDDataSetWithIndex method, SortFDDataSetWithFieldName takes two
parameters, a FireDAC dataset and a field name. It also returns a Boolean result
indicating success or failure of the sort. This method is shown here:
if SameText(DataSet.IndexFieldNames, FieldName) or
SameText(DataSet.IndexFieldNames, FieldName + ':A') then
DataSet.IndexFieldNames := FieldName + ':D'
else
DataSet.IndexFieldNames := FieldName + ':A';
Result := True;
end;
The first few lines of this method match that of the previous method. Both the
field name and suitability for sorting is verified. The remainder of the code is
similar in philosophy, but much simpler in execution. If the current value of
IndexFieldNames matches either the field name or the field name with the A
(ascending) modifier, then the IndexFieldNames property is assigned the name
of the field with the D (descending) modifier. Otherwise, IndexFieldNames is
assigned the field name with the A (ascending) modifier.
You call one of these methods from the OnTitleClick event handler of a DBGrid (or from some other method if you like, but then you will have to determine yourself which values to pass in the two parameters). OnTitleClick executes when the user clicks the header of a DBGrid column and the DBGrid's Options include the dgTitleClick flag. The FDIndexes project includes such an event handler, in which the call to SortFDDataSetWithFieldName is commented out.
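A minimal sketch of such a handler, assuming a grid named DBGrid1 bound to the FDMemTable used throughout this chapter, looks like the following:
procedure TForm1.DBGrid1TitleClick(Column: TColumn);
begin
  // Sort on the field behind the column whose title was clicked
  SortFDDataSetWithIndex(FDMemTable1, Column.FieldName);
  // SortFDDataSetWithFieldName(FDMemTable1, Column.FieldName);
end;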
Referring back to the source code for these two sorting methods, you will notice that the first method is more complicated. Why, then, would you consider using it? The answer is that, although it is more complicated, it produces something more than simply sorting the dataset: the first method also creates an FDIndex instance.
This is a subtle distinction, and you will rarely care, since both methods produce similar results from the user's perspective. But the difference is real. Consider Figure 7-13. In this figure, I am using the first method, SortFDDataSetWithIndex, and I have clicked a number of different column headings, after which I refreshed the index list and dropped down the
index-selecting combobox and scrolled to the bottom. In the combobox, you can
see the FDIndex names that have been created by SortFDDataSetWithIndex.
In the next two chapters, I will take an in-depth look at searching and filtering.
Chapter 8
Searching Data
In the context of this chapter, searching means attempting to locate a record
based on specific data that it contains. For example, attempting to find a record
for a particular customer based on their customer id number is considered
searching. Likewise, finding an invoice based on the date of the invoice and the
customer id number associated with that invoice is also considered a search
operation. Importantly, searching often results in changing which record is the
current record.
There is a somewhat similar operation that you can perform with FireDAC
datasets called filtering. Filtering, which shares some characteristics with
searching, involves selecting subsets of the records in a dataset based on the
data. In other words, while searching may change the current record, filtering
often results in a change in the number of records that are available within the
dataset. As a result, I decided to cover filtering separately in Chapter 9,
Filtering Data.
Code: The FDSearch project is available from the code download. See
Appendix A for details.
In order to compare the speed of the various search options, we need to have
some means of measuring the speed of searching. As demonstrated in the
preceding chapter, I do this using the TStopWatch record and two custom
methods: Start and Complete.
Start initiates a stop watch, and Complete captures the elapsed time and displays
the results in the status bar, as shown in Figure 8-1. In this figure, a search
operation has been completed, and information about the search speed appears
in the StatusBar.
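Start and Complete are not reproduced here; a minimal sketch based on the TStopwatch record from the System.Diagnostics unit might look like this (the StatusBar panel index and message format are assumptions):
uses
  System.Diagnostics, System.SysUtils;

var
  Stopwatch: TStopwatch;

procedure TForm1.Start;
begin
  Stopwatch := TStopwatch.StartNew; // begin timing the operation
end;

procedure TForm1.Complete;
begin
  Stopwatch.Stop;
  StatusBar1.Panels[0].Text :=
    'Duration: ' + IntToStr(Stopwatch.ElapsedMilliseconds) + ' ms';
end;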
Before discussing the individual search mechanisms, I also want to mention that
we are once again using the BigMemTable.xml FDMemTable file, just as we
did in the preceding chapter. As you may recall, this saved FDMemTable file
contains almost 25,000 records, making it a good candidate for comparing the
speed of the various search mechanisms.
Because this project uses an FDMemTable, search operations on the data are
very fast, since all of the data is in memory. If we used an FDQuery, the data
may or may not be all in memory, depending on how we loaded the data. As a
result, searching operations may be slower than those we will see in this project.
In short, if you ensure that an FDQuery has loaded its entire result set into
memory, search operations should be optimized. You use the
FetchOptions.Mode property to control whether FireDAC datasets load all of
their data at once, or on an as-needed basis.
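For example, the following lines (a sketch, assuming an FDQuery named FDQuery1) direct FireDAC to fetch the entire result set when the dataset is opened:
// fmAll fetches every record up front, so searches run against in-memory data
FDQuery1.FetchOptions.Mode := fmAll;
FDQuery1.Open;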
Simple Record-by-Record Searches
The simplest, and typically slowest, mechanism for searching is performed by
scanning. As I described in Chapter 6, Navigating and Editing Data, you can
scan a table by moving to either the first or last record in the current index order,
and then navigating record-by-record. When you use scanning for a search
operation, you read each record programmatically as you search, comparing the
record's data to the search criteria. When you find a record that contains the data
you are looking for, you stop scanning.
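Here is a minimal sketch of such a scan, assuming the LastName field and an edit control named ScanForEdit supplying the search text; the project's actual handler differs in its details:
var
  Found: Boolean;
begin
  Found := False;
  FDMemTable1.DisableControls;
  try
    FDMemTable1.First;
    while not FDMemTable1.Eof do
    begin
      // Compare the current record's data to the search criteria
      if SameText(FDMemTable1.FieldByName('LastName').AsString,
        ScanForEdit.Text) then
      begin
        Found := True;
        Break;
      end;
      FDMemTable1.Next;
    end;
  finally
    FDMemTable1.EnableControls;
  end;
end;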
An example of a scan-based search can be found in the OnClick event handler associated with the button labeled Start Scan in the FDSearch project. In fact,
Figure 8-1 depicts the results of a search operation where a record-by-record
scan located the first record in which the name Waters appears in the LastName
field.
finally
FDMemTable1.EnableControls;
end;
if Found then StatusBar1.panels[1].Text :=
ScanForEdit.Text + ' found at record '
+ IntToStr(FDMemTable1.RecNo)
else
StatusBar1.panels[1].Text :=
ScanForEdit.Text + ' not found';
end;
As you can see from the StatusBar shown in Figure 8-1, the last name Waters
was found in record number 23,957, and this search took approximately 75
milliseconds. That's pretty remarkable, if you think about it. Specifically, the
code in the preceding event handler evaluated almost 24,000 records in under a
tenth of a second before finding a match. And of course, if there were many
more records in the dataset, and the record you were searching for was much
later in the dataset, the search operation would take longer.
While the scan appears to be very fast, it is nearly always the slowest search you
can perform. Many of the remaining search mechanisms discussed in this
chapter make use of indexes to perform the search, and this speeds things up
significantly.
Note: The scan executed in the preceding code was performed with the data-aware controls disabled. If we had not invoked the DisableControls method on the FDMemTable before scanning, the operation would have taken significantly longer.
IndexOnComboBox.Text;
end;
The following single line populates this combobox. This line appears in the
AfterOpen event handler of FDMemTable1 (which must be active in order for
this code to perform its magic):
FDMemTable1.Fields.GetFieldNames( IndexOnComboBox.Items );
The use of FindKey is demonstrated by the code that appears in the OnClick
event handler for the button labeled FindKey, shown in the following code
segment:
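The essential part of such a handler is the FindKey call itself. As a sketch, assuming the current index is based on the LastName field and the status bar panel used elsewhere in the project:
FDMemTable1.IndexFieldNames := 'LastName';
if FDMemTable1.FindKey(['Waters']) then
  StatusBar1.Panels[1].Text :=
    'Found at record ' + IntToStr(FDMemTable1.RecNo)
else
  StatusBar1.Panels[1].Text := 'Waters not found';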
Figure 8-2 shows how the main form of the FDSearch project might look
following the use of FindKey to locate the first customer whose last name is
Waters. As you can see in this figure, the search took less than a millisecond,
even though the located record was close to the end of the 25,000 record table.
FindKey and FindNearest are identical in syntax. There is, however, a very big
difference in what they do. FindKey is a Boolean function method that returns
True if a matching record is located. In that case, the cursor is repositioned in
the dataset to the found record, which, if there is more than one match, is the
first match that is located based on the current index order. If FindKey fails, it
returns False and the cursor remains on the current record.
Figure 8-2: FindKey is used to search for the first customer named Waters.
This search, which is index-based, is much faster than a scan
IntToStr(FDMemTable1.RecNo);
end;
Figure 8-3: FindNearest always succeeds, even if it does not find an exact
match
GOING TO DATA
GotoKey and GotoNearest provide the same searching features as FindKey and
FindNearest, respectively. The only difference between these two sets of
methods is how you define your search criteria. As you have already learned,
FindKey and FindNearest are passed a constant array as a parameter, and the
search criteria are contained in this array.
Both GotoKey and GotoNearest take no parameters. Instead, their search criteria
are defined using the search key buffer. The search key buffer contains one field
for each field in the current index. For example, if the current index is based on
the field LastName, the search key buffer contains one field: LastName. On the
other hand, if the current index contains three fields, the search key buffer also
contains three fields.
Just as you do not have to define data for each field in the current index when
using FindKey and FindNearest, you do not have to define data for each field in
the search key buffer. However, those fields that you do define must be
associated with the left-most fields in the index definition. For example, if your
index is based on LastName and FirstName, you can use the search key buffer
to define only the LastName in the search key buffer, or both LastName and
FirstName. Using this same index, you cannot define only the FirstName in the
search key buffer.
Once the search key buffer has been populated with the values that you want to
search for, you call GotoKey or GotoNearest. At this point, these methods
perform the same search, with the same results, as FindKey and FindNearest,
respectively.
Fields in the search key buffer can only be modified when the dataset is in a
special state called the dsSetKey state. You call the dataset’s SetKey method to
clear the search key buffer and enter the dsSetKey state. If you have previously
assigned one or more values to the search key buffer, you can enter the
dsSetKey state without clearing the search key buffer's contents by calling the
dataset’s EditKey method.
Once the dataset is in the dsSetKey state, you assign data to Fields in the search
key buffer as if you were assigning data to the dataset’s Fields. For example,
assuming that the current index is based on the LastName and FirstName fields,
the following lines of code assign the value Selman to the LastName field of the
search key buffer, and the value Minnie to the FirstName field of the search key
buffer:
FDQuery1.SetKey;
FDQuery1.FieldByName('LastName').Value := 'Selman';
FDQuery1.FieldByName('FirstName').Value := 'Minnie';
By comparison, FindKey performs this same search with a single line of code:
FDQuery1.FindKey(['Selman', 'Minnie']);
Achieving the same result using GotoKey requires four lines of code since you
must first enter the dsSetKey state and edit the search key buffer. The following
lines of code, which use GotoKey, perform precisely the same search as the
preceding line of code:
FDQuery1.SetKey;
FDQuery1.FieldByName('FirstName').Value := 'Minnie';
FDQuery1.FieldByName('LastName').Value := 'Selman';
FDQuery1.GotoKey;
The following event handlers are associated with the buttons labeled Goto Key
and Goto Nearest in the FDSearch project:
end;
Since GotoKey and GotoNearest perform essentially the same tasks as FindKey
and FindNearest, though in a more verbose syntax, you might wonder why
anyone would use these methods when FindKey and FindNearest are available.
There is an answer, and it has to do with EditKey.
EditKey is a method that places the dataset in the dsSetKey state, but without
clearing the search key buffer. As a result, EditKey permits you to change a
single value or a subset of values in the search key buffer without affecting
those values you do not want to change. As a result, there are times when
GotoKey provides you with a more convenient way to define and change your
search criteria. You may never need GotoKey or GotoNearest, but if you do,
you'll be glad that these options exist.
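For example, after the GotoKey search shown above, the following sketch changes only the FirstName value in the search key buffer and searches again; the new value is hypothetical:
FDQuery1.EditKey; // re-enter dsSetKey without clearing the buffer
FDQuery1.FieldByName('FirstName').Value := 'Mickey'; // hypothetical value
FDQuery1.GotoKey; // LastName is still 'Selman' from the earlier SetKey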
Searching with Variants
FireDAC datasets, like most TDataSets, provide two additional searching
mechanisms, and these involve the use of variants. Unlike FindKey,
FindNearest, and their Goto counterparts, these variant-using search
mechanisms do not require an index.
For the search mechanisms that require an index, you must first set that index before you can use them. While this might not sound like a big deal, setting an index on a FireDAC dataset has the side effect of changing the sort
order of the records. While you can sidestep the potential change of record order
in the FireDAC dataset if you employ a cloned cursor, that too requires
additional code. I discuss using cloned cursors in detail in Chapter 13, More
FDMemTables: Cloned Cursors and Nested DataSets.
LOCATING DATA
Locate, like FindKey and GotoKey, makes the located record the current record
if a match is found. In addition, Locate is a function method and returns a
Boolean True if the search produces a match. Lookup is somewhat different in
that it returns requested data from a located record, but never moves the current
record pointer. Lookup is described separately later in this chapter.
What makes Locate and Lookup so special is that they do not require you to
create or switch indexes, but still provide much faster performance than
scanning. In a number of tests that I have conducted, Locate is always faster
than scanning, but generally slower than FindKey. Figure 8-4 displays a
representative search, in this case, searching for a customer named Waters.
At first glance, Locate looks pretty good, as the preceding search results are
close to that of the FindKey search shown in Figure 8-2, and significantly faster
than scanning. However, upon closer inspection, a wrinkle appears. If you
compare Figure 8-4 with Figure 8-1 and Figure 8-2, you will notice that they did
not locate the same record. Yes, they located a record with the last name,
Waters, but it is not the same record.
Another aspect that is not obvious, unless you spend some time testing various
uses of Locate, is that the speed of the search is dependent on the current order
of records in the FDMemTable. For example, in Figure 8-4 the current index is
based on the GUID field. If a different index was used, and there is more than
one entry where the last name is Waters, a different record may have been
located.
Figure 8-4: A customer with the last name Waters is found using Locate
However, if you were to select the LastName index, where Waters appears much later in the order, the Duration measure would likely increase. Should you care? That depends. Locate is fast, and neither Locate nor Lookup requires an index switch. FindKey is faster still, but it requires an index switch or some cloned cursor mojo. But I think you get what I'm trying to say. So let's learn how to use Locate.
Locate has the following syntax:
function Locate(const KeyFields: string; const KeyValues: Variant;
  Options: TLocateOptions): Boolean;
If you are locating a record based on a single field, the first argument is the
name of that field and the second argument is the value you are searching for.
To search on more than one field, pass a semicolon-separated string of field
names in the first argument, and a variant array containing the search values
corresponding to the field list in the second argument.
The third argument of Locate is a TLocateOptions set. This set can contain zero
or more of the following flags: loCaseInsensitive and loPartialKey. Include
loCaseInsensitive to ignore case in your search and loPartialKey to match any
value that begins with the values you pass in the second argument.
If the search is successful, Locate makes the located record the current record
and returns a value of True. If the search is not successful, Locate returns False
and the cursor does not move.
Imagine that you want to find a customer with the last name Waters. This can be
accomplished with the following statement:
FDQuery1.Locate('LastName', 'Waters',[]);
The following is an example of a partial match, searching for a record where the
LastName field begins with the letter W or w.
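For example, the following sketch relies on the loCaseInsensitive and loPartialKey options described below:
FDMemTable1.Locate('LastName', 'W', [loCaseInsensitive, loPartialKey]);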
Searching for two or more fields is more complicated in that you must pass the
search values using a variant array. The following lines of code demonstrate
how you can search for a record where the FirstName field contains Minnie and
the LastName field contains Selman:
var
SearchList: Variant;
begin
SearchList := VarArrayCreate([0, 1], VarVariant);
SearchList[0] := 'Minnie';
SearchList[1] := 'Selman';
FDMemTable1.Locate('FirstName;LastName',
SearchList, [loCaseInsensitive]);
Alternatively, you can use the VarArrayOf function, which does not require you to declare the array size at runtime. The following code performs the same search as the preceding code, but makes use of an array created using VarArrayOf:
var
SearchList: Variant;
begin
SearchList := VarArrayOf(['Minnie','Selman']);
FDMemTable1.Locate('FirstName;LastName',SearchList,
[loCaseInsensitive]);
If you refer back to the FDSearch project main form shown in the earlier figures
in this section, you will notice a StringGrid in the upper-right corner. Data
entered into the first two columns of this grid are used to create the KeyFields
and KeyValues arguments of Locate, respectively. The following methods,
found in the FDSearch project, generate these parameters:
var
i: Integer;
begin
Result := VarArrayCreate([0,Pred(Size)], VarVariant);
for i := 0 to Pred(Size) do
Result[i] := StringGrid1.Cells[SearchColumn, Succ(i)];
end;
The following code is associated with the OnClick event handler of the button
labeled Locate in the FDSearch project. As you can see in this code, the Locate
method is invoked based on the values returned by calling GetKeyFields and
GetKeyValues:
Note: Instead of passing a constant array in the call to VarArrayOf, you might
be able to get away with passing an array of TVarRec. Using an array of
TVarRec in place of an array of const is demonstrated in Chapter 9, Filtering
Data, in the discussion of SetRange.
USING LOOKUP
Lookup is similar in many respects to Locate, with one very important
difference. Instead of moving the current record pointer to the located record,
Lookup returns a variant containing data from the located record without
moving the current record pointer. The following is the syntax of Lookup:
function Lookup(const KeyFields: string; const KeyValues: Variant;
  const ResultFields: string): Variant;
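For example, the following sketch returns the City and State of the first customer named Waters without moving the cursor; the variable name is an assumption:
var
  LookupResult: Variant;
begin
  LookupResult := FDMemTable1.Lookup('LastName', 'Waters', 'City;State');
  if not VarIsNull(LookupResult) then
    ShowMessage(LookupResult[0] + ', ' + LookupResult[1]); // two result fields return a variant array
end;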
Result := StringGrid1.Cells[ReturnColumn, i]
else
Result := Result + ';' +
StringGrid1.Cells[ReturnColumn, i]
else
Break;
end;
The following is the code associated with the OnClick event handler of the
button labeled Lookup:
Figure 8-5 shows the main form of the FDSearch project following a call to Lookup. Notice that the current record is still the first record in the FDMemTable (as indicated by the DBNavigator buttons), even though the data returned from the call to Lookup was found much later in the current index order.
Note: Even though Lookup does not cause the current record to change, if the
current record is in edit mode, a call to Lookup will cause that record to post, or
raise an exception if posting fails.
Figure 8-5: Lookup is a relatively high-speed way to get data from a record
without changing the current record of a dataset
In the next chapter, you will learn how to filter FireDAC datasets.
Chapter 9
Filtering Data
When you filter a dataset, you restrict access to a subset of records contained in
the dataset. For example, imagine that you have an FDQuery that includes one
record for each of your company's customers, worldwide. Without filtering, all
customer records are accessible in the result set. That is, it is possible to
navigate, view, and edit any customer in the dataset.
Through filtering, you can make the FDQuery appear to include only those
customers who live in the United States or in London, England, or who live on a
street named 6th Avenue. This example, of course, assumes that there is a field
in the dataset that contains country names, or fields containing City and Country
names, or a field holding street names. In other words, a filter limits which
records in a dataset are accessible based on data that is stored in the dataset.
While a filter is similar to a search, it is also different in a number of significant
ways. For example, when you apply a filter, it is possible that the current record
in the dataset will change. This will happen if the record that was current before
the filter was applied no longer exists in the filtered dataset. When performing a
search, the current record may change as well, specifically if the record you are
searching for is not the current record. However, the record that was the current
record prior to the search operation is still accessible in the dataset.
Another difference is that a search operation never changes the number of
records in the FireDAC dataset, as reflected by the dataset's RecordCount
property. By comparison, if at least one record in the dataset does not match the
filter criteria, RecordCount will be lower following the application of the filter,
and will change again when the filter is dropped. (Recall that RecordCount is
also affected by the FetchOptions.RecordCountMode, as well as the
FetchOptions.LiveWindowParanoic properties.)
Filters
FireDAC datasets support two fundamentally different mechanisms for creating filters. The first of these involves a range, which is an index-based
filtering mechanism. The second, called a filter, is more flexible than a range,
but is slower to apply and cancel. Both of these approaches to filtering are
covered in the following sections.
But before addressing filtering directly, there are a couple of additional points
that I need to make. The first is that filtering is a client-side operation.
Specifically, the filters discussed here are applied to the data that is loaded into
the dataset. For example, you may load 10,000 records into an FDMemTable
(every customer record, for instance), and then apply a filter that limits access to
only those customers located in Philadelphia.
Once applied, the filter may make the dataset appear to contain only 300 records
(assuming that 300 of your customers are located in Philadelphia). Although the
filtered dataset provides access only to these 300 records (and the RecordCount
property returns 300), all 10,000 records remain in the dataset. In other words, a
filter does not reduce the overhead of your dataset — it simply restricts your
access to a subset of the dataset's records, those that match the filter criteria.
The second point is that instead of using a filter on the data loaded into your
FireDAC dataset, you may be better off limiting how many records are available
in your result set. Consider the preceding example where a query might return
10,000 customer records. Instead of loading all 10,000 records, it might be
better to load only those customer records associated with customers who live in
Philadelphia. Specifically, you might want to use a WHERE clause predicate
that limits the result set records to those associated with Philadelphia-based
customers. For example, consider a query similar to the following:
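A query along these lines fits that description (a sketch using the Customer table referenced throughout this chapter):
SELECT * FROM Customer WHERE City = 'Philadelphia'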
While the preceding query seems rather limiting in that it only allows the
selection of customers from Philadelphia, a better approach would be to define
the query using a parameter, similar to the SQL shown here:
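For example (a sketch using the City parameter described next):
SELECT * FROM Customer WHERE City = :City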
With this query in play, and with a single FDParam named City defined in the
FDQuery's Params property, you could use code similar to the following to
allow the end user to select into the dataset customer records from any city:
var
CityName: String;
begin
if InputQuery('Select Customers from which City',
'City', CityName) then
begin
FDQuery1.SQL.Text := 'SELECT * FROM Customer ' +
' WHERE City = :c;';
FDQuery1.Params[0].AsString := CityName;
FDQuery1.Open;
end;
end;
Important!: Using a parameter to pass a value entered by the end user into the predicate of the SQL statement is essential in order to avoid a vulnerability known as SQL injection. For more information about SQL injection, refer to the section Prevention of SQL Injection in Chapter 5, More Data Access.
Nonetheless, from the perspective of this discussion, these techniques are not
technically dataset filtering since they do not limit access within the dataset to a
subset of its loaded records.
So, when do you use filtering as opposed to loading only selected records into a
FireDAC dataset? The answer boils down to four issues: bandwidth, the source
of data, the amount of data, and client-side features.
When loading data from a DataSnap server, bandwidth is a concern. In
distributed applications like these (we are talking DataSnap here), it is usually
best to load only selected records when bandwidth is low. In this situation,
loading records that are not going to be displayed would consume bandwidth
unnecessarily, affecting the performance of your application as well as that of
others that share the bandwidth. On the other hand, if bandwidth is plentiful and
the entire result set is relatively small, it is often easier to load all data and filter
on those records that you want displayed.
The second consideration is data location. If you are loading data from a
previously saved FireDAC dataset (using LoadFromFile or LoadFromStream),
you have no choice. Filtering is the only option for showing just a subset of
records. Only when you are loading data through a query or stored procedure
call do you have a choice between using a filter and selectively loading the data.
See Chapter 17, Understanding Local SQL for information on querying data
loaded through a call to LoadFromFile or LoadFromStream.
The third consideration is the amount of data. If your SQL queries a database
table that has a very large amount of data, it may not even be possible to load all
of that data into the FDQuery. For example, if the query returns millions of
records, or contains a number of large BLOB (Binary Large OBject) fields, it
may not be possible to load all of that data into memory at the same time. In
these cases, you must use some technique, such as using a well-considered
WHERE clause, to load only that data you need or can handle.
The final consideration is related to client-side features, the most common of
which is speed. Once data is loaded into memory in the client, most filters are
applied very quickly, even when a large amount of data needs to be filtered. As
a result, filtering permits you to rapidly alter which subset of records are
displayed. A simple click of a button or a menu selection can almost instantly
switch your dataset from displaying customers from Philadelphia to displaying
customers from Dallas, without a network roundtrip.
The use of filters is demonstrated in the FDFilter project. The main form for this
project is shown in Figure 9-1.
Code: The code project FDFilter is available from the code download. See
Appendix A for details.
Several types of filters are demonstrated in the FDFilter project. This project
also makes use of BigMemTable.xml, a saved FDMemTable file that contains
almost 25,000 records. This file, which is assumed to be located in a directory
named BigMemTable in a folder parallel to the one in which the FDFilter
project is located, is loaded from the OnCreate event handler of the main form.
This event handler will also display an error message if BigMemTable.xml
cannot be located.
As mentioned earlier, there are two basic approaches to filtering: ranges and
filters. Let's start by looking at ranges.
Using a Range
Ranges, although less flexible than filters, provide the fastest option for
displaying a subset of records from a FireDAC dataset. In short, a range is an
index-based mechanism for defining the low and high values of records to be
displayed in the dataset. For example, if the current index is based on the
customer's last name, a range can be used to display all customers whose last
name is Jones. Or, a range can be used to display only customers whose last
name begins with the letter J. Similarly, if a FireDAC dataset is indexed on an
integer field called Credit Limit, a range can be used to display only those
customers whose credit limit is greater than (USD) $1,000, or between $0 and
$1000.
SETTING RANGES
Setting ranges with a FireDAC dataset bears a strong resemblance to the index-
based search mechanisms covered in Chapter 8, Searching Data. First, both of
these mechanisms require that you set an index, and that index can be a
temporary index or FDIndex-based. The range, like the search, is then based on
one or more fields in the current index. A second similarity between ranges and
index-based searches is that there are two ways to set a range. One technique
involves a single method call, similar to FindKey, while the other employs the
set key buffer, like GotoKey.
Let's start by looking at the easiest mechanism for setting a range — the
SetRange method. SetRange defines a range using a single method invocation.
In FireDAC, SetRange has the following syntax:
procedure SetRange(const AStartValues, AEndValues: array of const;
  AStartExclusive: Boolean = False; AEndExclusive: Boolean = False);
As you can see from this syntax, at a minimum you pass two arrays of const
when you call SetRange. The first array contains the low values of the range for
the fields of the index, with the first element in the array being the low value of
the range for the first field in the index, the second element being the low value
of the range for the second field in the index, and so on. The second array
contains the high end values for the index fields, with the first element in the
second array being the high end value of the range on the first field of the index,
the second element being the high value of the range on the second field of the
index, and so forth. These arrays can contain fewer elements than the number of
fields in the current index, but cannot contain more.
The SetRange call includes two optional parameters. Pass a Boolean value in the third parameter to control whether the lower end values provided in the first parameter are included in the range. If you pass False, the low end of the range is included in the range; pass True to exclude the lower end values from the range. The fourth parameter works the same way, but with respect to the upper end values. The default value for both parameters is False, in which case the range is inclusive.
Consider again our example of an FDQuery that returns all customer records.
Given that there is a field in this result set named City, and you want to display
only records for customers who live in Pleasantville, you can use the following
statements:
FDQuery1.IndexFieldNames := 'City';
FDQuery1.SetRange(['Pleasantville'], ['Pleasantville']);
The first statement creates a temporary index on the City field, while the second
sets the range. Of course, if the dataset is already using an index where the first
field of the index is the City field, you can omit the first line in the preceding
code segment.
Now, consider this example:
FDQuery1.IndexFieldNames := 'City';
FDQuery1.SetRange(['Pleasantville'], ['Pleasantville'], True, True);
In this case, we have chosen to exclude both the low end and the high end values. Because the range was defined with a single value, the dataset would display no records: the records whose City field contains Pleasantville are now excluded. What this example shows is that AStartExclusive and AEndExclusive are only meaningful when your range spans more than one value.
The preceding example sets the range on a single field, but it is often possible to
set a range on two or more fields of the current index. For example, imagine that
you want to display only those customers whose last name is Waters and who
live in Pleasantville, New York. The following statements show you how:
FDMemTable1.IndexFieldNames := 'LastName;City;State';
FDMemTable1.SetRange(['Waters', 'Pleasantville', 'NY'],
['Waters', 'Pleasantville', 'NY']);
Each of these examples sets the range to a single value in all fields, and in both
cases, setting the third and fourth parameters to True would have produced a
record count of zero. You can also set a range that spans a range of values rather than a single value. For example, imagine that you want to find all customers who live in
California and in a city whose name begins with A. You could achieve this with
the following statements:
FDMemTable1.IndexFieldNames := 'State;City';
FDMemTable1.SetRange(['CA', 'Aa'],
['CA', 'Az']);
Fortunately, the Delphi language has improved significantly over the years, and
now instead of passing an array of const, we have the option of using an array of
TVarRec. As a result, it is easier to create code that can accommodate a variable
number of range elements. This can be seen in the OnClick event handler for the
button labeled Set Range. This event handler is shown here:
Figure 9-2 shows how the FDFilter project looks after a range has been set on
two fields. This range displays all records associated with people from Des
Moines whose last name ranges from Masters to Spencer.
Figure 9-2: A range on two fields has been applied to the dataset
Now see what happens when you use the exclusive parameters. Figure 9-3
shows the same filter, except that this time both the Start Exclusive and End
Exclusive checkboxes have been checked. The subsequent call to SetRange
passed True in the third and fourth parameters, and as a result, the names
Masters and Spencer are no longer in the range.
Figure 9-3: Start Exclusive and End Exclusive have resulted in the starting
and ending values of the range being omitted from the range
USING APPLYRANGE
ApplyRange is the range equivalent to the GotoKey search. To use ApplyRange,
you begin by calling SetRangeStart (or EditRangeStart). Doing so places the
dataset in the dsSetKey state. While in this state, you assign values to one or
more of the Fields involved in the current index to define the low values of the
range. As is the case with SetRange, if you define a single low value, it will be
used to define the low end of the range on the first field of the current index. If
you define the low values of the range for two fields, they must necessarily be
the first two fields of the index.
After setting the low range values, you call SetRangeEnd (or EditRangeEnd).
You now assign values to one or more fields of the current index to define the
high values for the range. Once both the low values and high values of the range
have been set, you call ApplyRange to filter the dataset on the defined range.
FDQuery1.IndexFieldNames := 'City';
FDQuery1.SetRangeStart;
FDQuery1.FieldByName('City').Value := 'Philadelphia';
FDQuery1.SetRangeEnd;
FDQuery1.FieldByName('City').Value := 'Philadelphia';
FDQuery1.ApplyRange;
Just like SetRange, ApplyRange can be used to set a range on more than one
field of the index, as shown in the following example:
FDQuery1.IndexFieldNames := 'LastName;City;State';
FDQuery1.SetRangeStart;
FDQuery1.FieldByName('LastName').Value := 'Waters';
FDQuery1.FieldByName('City').Value := 'Pleasantville';
FDQuery1.FieldByName('State').Value := 'NY';
FDQuery1.SetRangeEnd;
FDQuery1.FieldByName('LastName').Value := 'Waters';
FDQuery1.FieldByName('City').Value := 'Pleasantville';
FDQuery1.FieldByName('State').Value := 'NY';
FDQuery1.ApplyRange;
A range can also be modified after it has been applied, using EditRangeStart and EditRangeEnd. For example, suppose you have set the following range on the CreditLimit field:
FDQuery1.IndexFieldNames := 'CreditLimit';
FDQuery1.SetRange([1000],[5000]);
If you then want to change the range to between $1,000 and $10,000, you can do
so using the following statements:
FDQuery1.EditRangeEnd;
FDQuery1.FieldByName('CreditLimit').Value := 10000;
FDQuery1.ApplyRange;
At first glance, ApplyRange sounds more verbose than SetRange, and this is true when you know in advance how many fields your range is based on. Ironically, when you do not know the number of fields in advance, and cannot use an array of TVarRec, ApplyRange can actually be more concise.
Consider the OnClick event handler associated with the button labeled Apply
Range. This event handler, shown in the following code segment, performs the
same task as my original (circa 1996) call to Set Range, a partial segment of
which I showed you. As you can see, the next event handler is much shorter. It
can also handle any number of fields in the index without requiring more code:
FDMemTable1.FieldByName(StringGrid1.Cells[0,i]).Value :=
Trim(StringGrid1.Cells[2,i]);
Complete;
if MaxItems < 6 then
CancelRangeBtn.Enabled := True;
end;
CANCELING A RANGE
Whether you have created a range using SetRange or ApplyRange, you cancel
that range by calling the dataset's CancelRange method. Canceling a range is
demonstrated by the OnClick event handler associated with the button labeled
Cancel Range, as shown in the following code:
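A minimal sketch of such a handler, using the Start and Complete helpers introduced in the preceding chapter, looks like this:
procedure TForm1.CancelRangeBtnClick(Sender: TObject);
begin
  Start;
  FDMemTable1.CancelRange; // drop the current range
  Complete;
end;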
Ranges can also be set on more than one field of the current index, and the order of those fields matters. For example, the following statements display only those customers from Des Moines whose credit limit is between $1,000 and $5,000:
FDMemTable1.IndexFieldNames := 'City;CreditLimit';
FDMemTable1.SetRange(['Des Moines', 1000], ['Des Moines', 5000]);
By comparison, the following statement will display all records for customers
whose credit limit is between $1,000 and $5,000, regardless of which city they
live in:
FDMemTable1.IndexFieldNames := 'CreditLimit;City';
FDMemTable1.SetRange([1000, 'Des Moines'], [5000, 'Des Moines']);
The difference between these two range examples is that in the first example,
the low and high value in the first field of the range is a constant value, Des
Moines. In the second, a range appears (1000-5000). Because a range of values,
instead of the same value, appears in the first field of the range, the second field
of the range is ignored.
The bottom line is this. If you want a range of values between a low value and a
high value to appear, it must be defined only in the last element of the array of
const parameters passed to SetRange, or defined in ApplyRange.
Using Filters
Because ranges rely on indexes, they are applied very quickly. For example,
using the BigMemTable.xml table with an index on the FirstName field, setting
a range to show only records for customers where the first name is Billy was
applied in less than a millisecond on my computer.
Filters, by comparison, do not use indexes. Instead, they operate by evaluating
the records of the FireDAC dataset, displaying only those records that pass the
filter. Since filters do not use indexes, they are not as fast. (Filtering on the first
name Billy took about 40 milliseconds in my tests.). On the other hand, filters
are much more flexible.
Note: You might recall from Chapter 7, Creating Indexes, that you can create a
Filter-based index. This discussion of a lack of performance using filters does
not apply to filter-based indexes, which use both filters and an index to produce
very fast filtered views.
FireDAC datasets, like most other TDataSet descendants, have four properties
that apply to filters. These are: Filter, Filtered, FilterOptions, and
OnFilterRecord (an event property). In its simplest case, a filter requires that
you use two of these properties: Filter and Filtered. Filtered is a Boolean
property that you use to turn filtering on and off. If you want to filter records, set
Filtered to True. Otherwise, set Filtered to False (the default value).
Basic Filters
When Filtered is set to True, the dataset uses the value of the Filter property to
identify which records will be displayed. You assign to this property a Boolean
expression containing at least one comparison operator involving at least one
field in the dataset. You can use any comparison operators, including =, >, <,
>=, <=, and <>. As long as the field name does not include any spaces, you
include the field name directly in the filter expression without delimiters. For
example, if your dataset includes a field named City, you can set the Filter
property to the following expression to display only customers living in Dayton
(when Filtered is True):
City = 'Dayton'
Note that the single quotes are required here, since Dayton is a string literal. If
you want to assign a value to the Filter property at runtime, you must include the
single quotes in the string that you assign to the property. The following is one
example of how to do this:
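For example (a sketch; the dataset name is an assumption):
FDQuery1.Filter := 'City = ' + QuotedStr('Dayton');
FDQuery1.Filtered := True;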
The preceding code segment used the QuotedStr function, which is located in
the System.SysUtils unit. The alternative is to use something like the following:
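That alternative doubles the quote characters inside the string literal, as in this sketch:
FDQuery1.Filter := 'City = ''Dayton''';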
In the preceding examples, the field name of the field in the filter does not
include spaces. If one or more fields that you want to use in a filter include
spaces in their field names, or characters that would otherwise be interpreted to
mean something else, such as the > (greater than) symbol, enclose those field
names in square braces. (Square braces can also be used around field names that
do not include spaces or special characters.) For example, if your dataset
contains a field named 'Last Name,' you can use a statement similar to the
following to create a filter:
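For example (a sketch; the value being compared is illustrative):
FDQuery1.Filter := '[Last Name] = ' + QuotedStr('Waters');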
Code: You can find the FDAdvancedFilters project in the code download.
In fact, there are very few OnClick event handlers used in this project. One of
these event handlers is assigned to every button on which the Caption property
is set to a valid filter. This event handler, named AssignFilter in this project,
casts the Sender parameter as a TButton instance, and then assigns the Caption
property of this TButton reference to the Filter property of the FDQuery. This
event handler is shown here:
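A minimal sketch consistent with that description looks like this; the form and dataset names are assumptions:
procedure TForm1.AssignFilter(Sender: TObject);
begin
  // The button captions in this project hold valid filter expressions
  FDQuery1.Filter := (Sender as TButton).Caption;
end;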
The Button whose caption reads Clear Filter has an event handler that assigns an
empty string to the Filter property, which, even though the Filtered property is
set to True, has the effect of dropping the filter. This event handler is shown
here:
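A sketch of that handler (the handler name is an assumption):
procedure TForm1.ClearFilterBtnClick(Sender: TObject);
begin
  FDQuery1.Filter := ''; // an empty filter expression drops the filter
end;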
This project also includes an Edit, into which you can enter a filter expression,
after which you can click the accompanying button labeled Apply Filter. The
Apply Filter button assigns the Text property of the Edit to the FDQuery's Filter
property and, since the Filtered property of the FDQuery is already set to True,
results in filtering. (As with the buttons whose captions contain a valid filter
expression, you clear this filter by clicking the button labeled Clear Filter.) The
FDAdvancedFilters project main form is pretty large, so in order to make the
remaining figures in this chapter readable, I will use the Edit and its Apply Filter
button to demonstrate the filters.
NULL COMPARISONS
Null comparisons permit you to compare a dataset's fields to a null value. Unlike
the other comparison filter operators, these comparisons use a special syntax
that prevents the dataset from interpreting the keyword NULL as a field name.
The use of NULL comparisons is demonstrated by the two buttons that appear
under the label NULL comparisons.
For example, to filter the State field on records where the State field is a null
value, use the following filter expression:
State IS NULL
To include only records where the State field contains data, use the following:
State IS NOT NULL
Figure 9-5 shows the FDAdvancedFilters project after the State IS NULL filter
has been applied.
Figure 9-5: The State IS NULL filter expression has been applied
STRING FUNCTIONS
Delphi supports six string functions. These are Upper, Lower, SubString, Trim,
TrimRight, and TrimLeft.
Upper and Lower are the simplest of these to use. Upper converts a single string
expression to uppercase, and Lower converts a string expression to lowercase.
Here is an example of a string expression that converts the contents of the City field to uppercase, after which it compares that value to 'KAPAA KAUAI':
Upper(City) = 'KAPAA KAUAI'
The SubString function is more complicated, in that it can accept either two or
three arguments. When used with two arguments, the first argument is a string
expression (typically a string field reference) and the second is the position of
the first character from which to begin the comparison (1-based, not 0-based).
For example, the following expression compares the City field to the string
port, beginning with the fifth character of the City field. Records where the City
field contains the value Freeport match this filter:
SubString(City, 5) = 'port'
When three arguments are passed to SubString, the first is a string expression
(again, typically a string field), the second is the position within that string to
begin the comparison, and the third is the number of characters to compare. As a
result, the following filter expression will display only those records whose City
field contains the letter r in the second position. Figure 9-6 shows the
FDAdvancedFilters project filtering on this filter expression:
SubString(CITY, 2, 1) = 'r'
The Trim, TrimRight, and TrimLeft functions all take either one or two
arguments. When a single string argument is passed, these functions will
remove blank spaces from either the right side (TrimRight), the left side
(TrimLeft), or both left and right sides of the string argument (Trim). For
example, the following filter expression will trim white space (blank characters)
from both the right and left of the Company field, and compare the result to the
string Unisco:
Trim(Company) = 'Unisco'
When the optional second parameter is employed, you pass a single character in
the second argument, and it is that character (instead of a blank character) that is
trimmed. Consequently, the following filter expression limits the dataset to displaying only records where the Company name field contains the value nisco once all of the leading U characters have been trimmed off:
TrimLeft(Company, 'U') = 'nisco'
DATE/TIME FUNCTIONS
Delphi's date/time filter expression functions permit you to make comparisons
based on partial date values, such as a particular year, month, hour, or second. In
addition, they permit you to compare against the date part or time part of a
timestamp expression, and even compare using the current date.
SELECT c.*,
{fn CONVERT(c.LastInvoiceDate, DATE) } AS LastInvDate,
{fn CONVERT(c.LastInvoiceDate, Time) } AS LastInvTime
FROM Customer c
Using these two new fields, named LastInvDate and LastInvTime, it is possible
to demonstrate the use of Delphi’s date/time filter expression functions.
The following filter expression will cause the dataset to display only those
records where the LastInvDate field contains dates from the year 1989:
Year(LastInvDate) = 1989
Figure 9-7: The Year function is used to filter on records from 1989
Similarly, the following filter expression causes Delphi to display records whose LastInvTime value is later in the day than 12:00 pm (noon):
Hour(LastInvTime) > 12
The Date() function returns the date portion of a date/time value, and Time()
returns the time portion. Also, GetDate() returns the current date. As a result, the
following filter will return all records whose InvoiceDueDate is more than 30
days old, and for which the InvoicePaid field is null (though no records
matching this filter exist in the Customer table of Delphi's dbdemos.gdb
InterBase database):
MISCELLANEOUS FUNCTIONS
There are two miscellaneous Delphi functions supported by filter expressions,
and these are LIKE and IN. The LIKE statement operates similar to how the
SQL-92 LIKE statement works, in that you compare a string expression with a
pattern that can include the % and _ wildcard characters. Within this pattern, the
% wildcard stands for zero or more characters, and the _ character stands for
exactly one character.
The following filter expression demonstrates a use of LIKE, and will search for any Company name that includes the characters Dive:
Company LIKE '%Dive%'
Likewise, the following filter expression will display any record where the City field ends with the characters port:
City LIKE '%port'
Of course, special filter expressions can be combined. Here is a filter that will
display only records where the letters 'shop' (in any combination of upper or
lowercase letters) appear, followed by exactly two more characters. (Note that
we have had to insert a slight space between the two underscore characters in
the following expression for readability. Had we not done this, the printed copy
would appear to include a single underscore character.) The effect can be seen in
Figure 9-8:
Similar to LIKE, the IN keyword works similar to that found in SQL-92. When
using IN, you follow an expression with the keyword IN followed by a comma-
separated set of possible values enclosed in parentheses. The following filter
expression displays only companies whose mailing address is in CA
(California), FL (Florida), or HI (Hawaii):
State IN ('CA', 'FL', 'HI')
If you want to use a backslash in a LIKE expression, you escape it with another
backslash. In other words, two consecutive backslashes filter on a single
backslash.
FIREDAC SCALAR FUNCTIONS
As I mentioned at the outset of this section on special filter expressions,
FireDAC adds its own collection of scalar functions that you can use in filters
(and other expressions in Delphi). There are many more functions available
through FireDAC’s scalar functions than offered from Delphi’s TDataSet
implementation, and you will find tables of these functions in Chapter 14, The
SQL Command Preprocessor.
The one difference between using these functions in filter expressions and using them in SQL statements is that you do not enclose the functions in curly braces
when using them in filter expressions, but you must do so in SQL statements.
LENGTH() is a FireDAC scalar string function, and it returns the number of
characters in a string. Figure 9-9 shows the LENGTH() function being used in a
filter expression in the FDAdvancedFilters project. But let me remind you once
more, if you want to use these FireDAC scalar functions in filter expressions,
you must include the FireDAC.Stan.ExprFuncs unit in your uses clause.
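For example, a filter expression along the following lines (the field name and threshold are illustrative) displays only records with longer company names:
LENGTH(Company) > 20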
The reason for this is that, due to case sensitivity, no records contain the string
dive. However, if you click the Case Insensitive checkbox, the
foCaseInsensitive flag gets added to the FilterOptions property, after which this
filter expression will include any company whose name contains the letters dive,
regardless of case. This can be seen in Figure 9-10.
whose contents begin with the characters to the left of the asterisk are included
in the filter. For example, consider the following filter:
Company = 'D*'
Figure 9-11: By default, the asterisk character (*) is a wildcard that can
appear at the end of a string comparison
corresponding record in another table. If, based on this test, you wish to exclude
the current record from the view, you set the value of the Accept formal
parameter to False. This parameter is True by default.
The Filter property normally consists of one or more comparisons involving
values in fields of the dataset. OnFilterRecord event handlers, however, can
include any comparison you want. And therein lies the danger. Specifically, if
the comparison that you perform in the OnFilterRecord event handler is time
consuming, the filter will be slow. In other words, you should try to optimize
any code that you place in an OnFilterRecord event handler, especially if you
need to filter a lot of records, since it is executed for each and every record in
the dataset (with an exception noted in the final section of this chapter, Using
Ranges and Filters Together).
The following is a simple example of an OnFilterRecord event handler:
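A minimal sketch of such a handler, assuming a CreditLimit field, looks like this:
procedure TForm1.FDQuery1FilterRecord(DataSet: TDataSet;
  var Accept: Boolean);
begin
  // Accept only records whose credit limit exceeds 1,000
  Accept := DataSet.FieldByName('CreditLimit').AsFloat > 1000;
end;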
This navigation, however, does not require that the Filtered property be set to
True. In other words, while all records of the dataset may be visible, the filter
can be used to quickly navigate among those records that match the filter.
When you execute the methods FindNext or FindPrior, the dataset sets a
property named Found. If Found is True, a next record or a prior record was
located, and is now the current record. If Found returns False, the attempt to
navigate failed. However, all of the filtered navigation methods are function
methods that return a Boolean True if the operation was successful. For
example, after setting the filter expression, a call to FindFirst or FindLast returns
False if no records match the filter expression.
The use of filtered navigation is demonstrated in the event handlers associated
with the buttons labeled First, Prior, Next, and Last in the FDFilter project
(shown earlier in this chapter). These event handlers are shown here:
  FDMemTable1.FindNext;
  if not FDMemTable1.Found then
    ShowMessage('No next record found');
end;
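For comparison, a sketch of the PriorBtnClick handler, which tests the function result directly (the form class name is illustrative):

  procedure TForm1.PriorBtnClick(Sender: TObject);
  begin
    if not FDMemTable1.FindPrior then
      ShowMessage('No prior record found');
  end;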
Notice that the NextBtnClick method uses the Found property of the dataset to
determine if the navigation was successful. By comparison, the PriorBtnClick
simply uses the return value of FindPrior to measure success. The values
returned by the calls to FindFirst and FindLast are also used in this code to
evaluate whether or not any records matched the expression entered into the
provided Edit.
Figure 9-12 shows how the FDFilter project looks after a filtered navigation.
After entering the filter expression FirstName = 'Scarlett' into the Edit, and
clicking the Apply Filter Expression button, the button labeled First was clicked.
This moved the cursor to record number 510. The Next button was then clicked,
and the record at position 838 became the current record. The navigation is
being performed on records where the first name field contains the value
Scarlett, even though all records from the FDMemTable are visible in the
DBGrid.
Figure 9-12: Filtered navigation uses the filter expression to navigate, even
when Filtered is set to False
This advantage of using ranges and filters together is especially useful when you
employ the OnFilterRecord event handler, which as pointed out earlier, can
negatively affect filter performance. By first using a range to eliminate as many
records as possible from the view, OnFilterRecord executes for a smaller number of
records, which in turn improves the performance of the filter.
Figure 9-13: Data from two related tables are displayed in a master-detail
relationship
Code: The code project FDMasterDetail is available from the code download.
What’s especially interesting about the data displayed in Figure 9-13 is that this
association between the datasets is defined using properties. There is no custom
code being executed to produce and maintain the filtered view displayed in the
detail table. If you run this project and navigate to a new current record in the
Customer table, the Sales table view updates automatically. This is what is
referred to as a dynamic master-detail relationship.
There are two ways in Delphi to define a dynamic master-detail relationship
between two datasets, and in this case, I am going to specifically describe how
this is done using FireDAC datasets. Other datasets also support these
techniques, but there may be minor differences in the specific properties that
you use.
The two approaches to defining dynamic master-detail relationships are referred
to as range-based and parameter-based. Range-based dynamic master-detail
relationships can be implemented by any type of dataset in the detail role, while
parameter-based relationships require a detail dataset whose records are selected
by a parameterized command, such as an FDQuery.
You set MasterSource by selecting one of the in-scope data source objects from
the property's dropdown list, and you enter the IndexFieldNames property
manually (or select an appropriate FDIndex that you have created). You can
enter the value for the MasterFields property manually, or you can invoke the
property editor for this property. The MasterFields property editor for FireDAC
datasets is shown in Figure 9-14.
Figure 9-14: The MasterFields property editor has been used to select the
primary key of the dataset to which the MasterSource property refers
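For reference, the same range-based link could be established in code at runtime with something along these lines (the component names are those of the FDMasterDetail form shown later in this chapter; which of its customer data sources serves the MasterSource role in the actual project may differ):

  SalesQueryRB.MasterSource := dscCustomerRB;   // data source pointing at the Customer query
  SalesQueryRB.MasterFields := 'CUST_NO';       // linking field(s) in the master dataset
  SalesQueryRB.IndexFieldNames := 'CUST_NO';    // matching field(s) in the detail dataset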
In a parameter-based relationship, the detail dataset's SQL statement contains
one or more parameters whose names match fields of the master dataset. In the
case of a relationship between the Customer and Sales tables, this query looks
like the following:
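SELECT * FROM SALES WHERE CUST_NO = :cust_no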
Since CUST_NO is the name of the Customer table’s primary key, the
preceding query includes a parameter named :cust_no. I have deliberately used
lowercase for the parameter name to emphasize that the parameter name is case
insensitive.
If you have defined the query prior to setting the MasterSource property of your
detail FDQuery, FireDAC will fill in the MasterFields property for you, but you
will still need to define an index based on the detail table’s foreign key field or
fields, which is done using IndexFieldNames in this case.
Figure 9-15 shows the running FDMasterDetail project with the Parameter-
Based tab selected. The two grids display the data from two FDQueries that are
configured using parameter-based dynamic master-detail properties.
Figure 9-15: The Sales query values are dynamically filtered based on a
parameterized query
If you run this project and navigate between records of either of the master
datasets, you will see the corresponding detail dataset dynamically filter to
display only those records associated with the current master record. And, this happens based on
properties alone. There is no custom code in the project that participates in this
operation, which you can see from the following, which shows the entire source
code for the main form unit:
unit mainformu;
interface
uses
Winapi.Windows, Winapi.Messages, System.SysUtils,
System.Variants, System.Classes, Vcl.Graphics,
Vcl.Controls, Vcl.Forms, Vcl.Dialogs, FireDAC.Stan.Intf,
FireDAC.Stan.Option, FireDAC.Stan.Param, FireDAC.Stan.Error,
FireDAC.DatS, FireDAC.Phys.Intf, FireDAC.DApt.Intf,
FireDAC.Stan.Async, FireDAC.DApt, Data.DB, Vcl.Grids,
Vcl.DBGrids, Vcl.ExtCtrls, Vcl.DBCtrls, FireDAC.Comp.DataSet,
FireDAC.Comp.Client, Vcl.ComCtrls;
type
TMainForm = class(TForm)
PageControl1: TPageControl;
TabSheet1: TTabSheet;
TabSheet2: TTabSheet;
CustomerQueryRB: TFDQuery;
SalesQueryRB: TFDQuery;
CustomerQueryPB: TFDQuery;
SalesQueryPB: TFDQuery;
DBNavigator1: TDBNavigator;
DBGrid1: TDBGrid;
DBNavigator2: TDBNavigator;
DBGrid2: TDBGrid;
DBNavigator3: TDBNavigator;
grdCustomerRB: TDBGrid;
DBNavigator4: TDBNavigator;
grdSalesRB: TDBGrid;
CustomerSourceRB: TDataSource;
CustomerSourcePB: TDataSource;
dscCustomerPB: TDataSource;
dscSalesPB: TDataSource;
dscCustomerRB: TDataSource;
dscSalesRB: TDataSource;
procedure FormCreate(Sender: TObject);
private
{ Private declarations }
public
{ Public declarations }
end;
var
MainForm: TMainForm;
implementation
{$R *.dfm}
uses SharedDMVclU;
end.
Note: You may have noticed that I have used a total of six data source objects in
this project, where two different data sources are pointing to each of the master
datasets. While I could have used a single data source to point to the master
datasets, and referenced it by both the detail dataset and the DBGrid, I used
two, since each of them serves a different purpose. When creating a dynamic
master-detail relationship, I like to use separate data sources for the
MasterSource property and the wiring of data-aware controls to the datasets,
since these two roles are distinctly different.
Chapter 10
Creating and Using
Virtual Fields
I introduced the concept of fields in Chapter 6, Navigating and Editing Data. In
that discussion, I demonstrated that fields could be used to read from, and
sometimes write to, the columns returned in a dataset, as well as access
metadata about those columns. Those fields were a particular type of field,
referred to as a Data field. Data fields are always associated with the data
returned from query or stored procedure calls, or data loaded from a file or
stream, and there is always a one-to-one association between a particular field
and a column in the dataset.
There is another category of field, called virtual fields. Unlike Data fields,
virtual fields don’t necessarily derive directly from an operation against a
database or a file or stream. Instead, they result from some operation or
calculation: for example, from code of yours that assigns a value to the field,
or from an operation that you have configured FireDAC to perform, such as a
summary calculation.
More importantly, the values of virtual fields are never written to the underlying
database. These values are virtual — they belong to the dataset in memory, but
are not part of a result set in the same way that Data fields are. (InternalCalc
fields, a type of Calculated field, are an exception, in that they can be persisted
to a file or stream. This is discussed later in this chapter, as well in Chapter 11,
Persisting Data.)
There are three general categories of virtual fields: Aggregate fields, Calculated
fields, and Lookup fields. We will explore virtual fields in this chapter along
with two related topics. Towards the end of this chapter, I discuss FieldOptions,
a new property introduced in Delphi XE6 that gives you more control over Data
fields and virtual fields. I also show you how to use the GetGroupState method,
which is closely associated with Aggregate fields, the first type of virtual field
that we will consider.
Before I continue, I need to point out another way in which the various fields in
Delphi differ. Fields can also be categorized as being either dynamic fields or
persistent fields. Dynamic fields are always Data fields, meaning that they are
associated with a column of data referred to by a dataset. Dynamic fields are
created dynamically, at runtime, when the dataset becomes active, and one
dynamic field is created at runtime for each column in the corresponding result
set.
Persistent fields, by comparison, are created at design time, and persist from the
design-time environment to the runtime environment. Both Data and virtual
fields can be created as persistent fields, and when they are created, they appear
in the published section of your form, frame, or data module, like that seen in
the following partial type declaration:
type
TForm1 = class(TForm)
Panel1: TPanel;
DBGrid1: TDBGrid;
DBNavigator1: TDBNavigator;
DataSource1: TDataSource;
DataSource2: TDataSource;
FDQuery1: TFDQuery;
FDQuery2: TFDQuery;
FDConnection1: TFDConnection;
FDQuery1ORDERNO: TFloatField;
FDQuery1CUSTNO: TFloatField;
FDQuery1SALEDATE: TSQLTimeStampField;
FDQuery1SHIPDATE: TSQLTimeStampField;
FDQuery1EMPNO: TIntegerField;
FDQuery1SHIPTOCONTACT: TStringField;
FDQuery1SHIPTOADDR1: TStringField;
Aggregate Fields
An aggregate is an object that can automatically perform a simple descriptive
statistical calculation across one or more records in a dataset. For example,
imagine that you have an FDQuery that contains a list of all purchases made by
your customers. If each record contains fields that identify the customer, the
number of items purchased, and the total value of the purchase, an aggregate can
calculate the sum of all purchases across all records in the table. Another
aggregate can calculate the average number of items purchased by each
customer, and a third aggregate can calculate the average cost of a given
customer's items.
FireDAC dataset aggregates support a total of five statistics: count,
minimum, maximum, sum, and average.
There are two types of objects that you can use to create aggregates: Aggregates
and AggregateFields. An Aggregate is a CollectionItem descendant, and an
AggregateField is a descendant of the TField class.
While these two aggregate types are similar in how you configure them, they
differ in their use. Specifically, an AggregateField, because it is a TField
descendant, can be associated with data-aware controls (and LiveBindings),
permitting the aggregated value to be displayed automatically. By comparison,
an Aggregate is an object whose value must be explicitly read at runtime.
One characteristic shared by both types of aggregates is that they require quite a
few specific steps to configure them. If you have never used aggregation in the
past, be patient. If your aggregates do not appear to work at first, you probably
missed one or more steps in their configuration. However, after you get
comfortable configuring aggregates, you will find that they are relatively easy to
use.
Because AggregateField instances are somewhat easier to use, I will consider
them first. Using Aggregates is discussed later in this chapter.
Note: Before the introduction of FireDAC, the ClientDataSet was the only other
TDataSet descendant included with Delphi to support AggregateFields, Aggregates, and
GroupState. For more information on ClientDataSets, please refer to my last
book, Delphi in Depth: ClientDataSets, Second Edition, 2015.
2. Using the Data Explorer, expand the FireDAC node, and then the
InterBase node. Now expand the Employee node, and from there, the
Tables node. Drag the node for the Sales table and drop it onto the form.
Delphi should create a configured FDConnection on the form named
EmployeeConnection, as well as a configured FDQuery named
SalesTable.
3. Next, add to your main form a DBNavigator, a DBGrid, and a
DataSource.
4. Set the Align property of the DBNavigator to alTop, and the Align
property of the DBGrid to alClient.
5. Next, set the DataSource property of both the DBNavigator and the
DBGrid to DataSource1.
6. Now set the DataSet property of DataSource1 to SalesTable.
7. Test that everything is working fine by setting the SalesTable’s Active
property to True. Your form should look something like that shown in
Figure 10-1.
8. Delphi’s documentation suggests that you configure your Aggregate
field’s expressions while your FireDAC dataset is inactive, so set the
SalesTable’s Active property to False before continuing.
1. Double-click SalesTable to display the Fields Editor.
2. Right-click the Fields Editor and select New Field (or press Ctrl-N).
Delphi displays the New Field dialog box.
3. At Name, enter CustomerTotal and select the Aggregate radio button in
the Field type area. Your New Field dialog box should now look
something like that shown in Figure 10-2.
Figure 10-2: A new virtual AggregateField is being defined in the New Field
Editor
4. Click OK to close the New Field dialog box. You will see the newly
added aggregate field in the Fields Editor, as shown here:
Notice that the newly added CustomerTotal field appears in its own little
window at the bottom of the Fields Editor. All AggregateFields appear in this
window, which serves to separate AggregateFields from any other persistent
fields, virtual or Data. As mentioned earlier in this chapter, the significance of
this separation is that the presence of persistent AggregateFields alone does not,
by default, preclude the automatic creation of dynamic Data fields.
To define what the aggregate calculates, select the new CustomerTotal field in
the Fields Editor and, using the Object Inspector, set its Expression property to
the following expression, which sums the TOTAL_VALUE field:
SUM(TOTAL_VALUE)
The argument of the aggregate function can include two or more fields in an
expression, if you like. For example, if you have two fields in your table, one
named Quantity and the other named Price, you can use the following
expression:
SUM(Quantity * Price)
The expression can also include literals. For example, if the tax rate is 8.25%,
you can create an aggregate that calculates the total plus tax, using something
similar to this:
SUM(Total * 1.0825)
You can also set the Expression property to perform an operation on two
aggregate functions, as shown here:
MIN(SHIP_DATE) - MIN(ORDER_DATE)
A literal can also appear outside the aggregate function, as in:
MAX(ShipDate) + 30
You cannot, however, nest one aggregate function inside another. For example,
the following expression is invalid:
SUM(AVG(AmountPaid)) //invalid
Nor can you use an expression that contains a calculation between an aggregate
function and a field. For example, if Quantity is the name of a field, the
following expression is invalid (since an aggregate is performed across one or
more records, by definition, but the field reference applies to single records):
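For instance, an expression along these lines is invalid (the field names are illustrative):

  SUM(Quantity * Price) - Quantity //invalid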
The persistent index whose name you assign to the IndexName property can
have more fields than the number of fields you want to group on. This is where
the aggregate’s GroupingLevel comes in. You set GroupingLevel to the number
of fields of the index that you want to treat as a group.
The following steps walk you through the process of creating a persistent index
on the CUST_NO field, and then setting the AggregateField to use this index
with a grouping level of 1:
1. Select the SalesTable in the Object Inspector and select its Indexes
property. Click the ellipsis button of the Indexes property to display the
FDIndex collection editor.
2. Click the Add New button in the FDIndex collection editor toolbar to
add a new persistent index.
3. With the new index selected in the collection editor, use the Object
Inspector to set the Name property of this index to CustIdx, its Fields
property to CUST_NO, and its Active property to True. You can now
close the FDIndex collection editor.
4. With SalesTable still selected, set its IndexName property to CustIdx.
5. Next, using the Fields Editor, once again select the AggregateField.
Since we’ve already set the IndexName property of the FDQuery to
CustIdx, we do not have to touch the aggregate field’s index property.
Instead, set the GroupingLevel property to 1. You might also want to set
the Alignment property of this field to taRightJustify, as it is a
currency value, and this will make it look better.
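If you prefer, the same index configuration can be performed in code at runtime; a sketch, assuming the component names used above:

  with SalesTable.Indexes.Add do
  begin
    Name := 'CustIdx';
    Fields := 'CUST_NO';
    Active := True;
  end;
  SalesTable.IndexName := 'CustIdx';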
MAKING THE AGGREGATE FIELD AVAILABLE
The AggregateField is almost ready. In order for it to work, however, you must
set the AggregateField's Active property to True. In addition, you must set the
FireDAC dataset’s AggregatesActive property to True. After doing this, the
aggregate will automatically be calculated when the dataset is made active.
With AggregateFields, there is one more step than required with Aggregates,
which is to associate the AggregateField with a data-aware control or
LiveBinding (if you want to display the data in the user interface).
The following steps demonstrate how to activate the AggregateField, as well as
make it visible in the DBGrid:
1. With the AggregateField selected in the Object Inspector, set its Active
property to True.
6. With this new Column selected, use the Object Inspector to set its
FieldName property to CustomerTotal. Setting this property also has the
side effect of changing the name of the new Column in the Columns
collection editor. Next, change the position of this Column in the
Columns collection editor by dragging it to the fourth position,
immediately below the SALES_REP Column, as shown here:
If you followed all of these steps correctly, your newly added AggregateField
should be visible in the fourth column of your DBGrid, as shown in Figure 10-4.
2. Click the Add New button twice on the Aggregates collection editor's
toolbar to add two aggregates to your dataset.
3. Select the first Aggregate in the Aggregates collection editor. Using the
Object Inspector, set the aggregate's Expression property to
AVG(TOTAL_VALUE), its AggregateName property to CustAvg, its
IndexName property to CustIdx, its GroupingLevel property to 1, and its
Active property to True.
4. Select the second Aggregate in the Aggregates collection editor. Using
the Object Inspector, set its Expression property to
MIN(ORDER_DATE), its AggregateName property to FirstOrder, its
IndexName property to CustIdx, its GroupingLevel property to 1, and its
Active property to True.
5. You can now set the FDQuery’s Active property to True.
6. Add a PopupMenu from the Standard page of the Tool Palette to your
project. Using the Menu Designer (double-click the PopupMenu to
display this editor), add a single MenuItem, setting its caption to "About
this customer."
7. Set the PopupMenu property of the DBGrid to PopUpMenu1.
8. Finally, add the following event handler to the About this customer
MenuItem:
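The handler is not reproduced here; a minimal sketch that reads the two Aggregates defined above and displays their results (assuming each TFDAggregate exposes its result through a Value property; the names and formatting in the actual project may differ) could look like this:

  procedure TMainForm.AboutThisCustomer1Click(Sender: TObject);
  begin
    // Items[0] is the CustAvg aggregate, Items[1] is FirstOrder (added above)
    ShowMessage(
      'Average sale: ' + VarToStr(SalesTable.Aggregates.Items[0].Value) + sLineBreak +
      'First order: ' + VarToStr(SalesTable.Aggregates.Items[1].Value));
  end;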
If you now run this project, you can see the values calculated by the Aggregates
collection items by right-clicking a record and selecting About this customer.
The displayed dialog box should look something like that shown in Figure 10-5.
When you call GetGroupState, you pass an integer indicating grouping level.
Passing a value of 0 (zero) to GetGroupState will return information about the
current record's relative position within the entire dataset. Passing a value of 1
will return the current record's group state with respect to the first field of the
current index, passing a value of 2 will return the current record's group state
with respect to the first two fields of the current index, and so on.
GetGroupState returns a set of TGroupPosInd flags. TGroupPosInd is declared
as follows:
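TGroupPosInd = (gbFirst, gbMiddle, gbLast);
TGroupPosInds = set of TGroupPosInd;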
As should be obvious, if the current record is the first record in the group,
GetGroupState will return a set containing the gbFirst flag. If the record is the
last record in the group, this set will contain gbLast. When GetGroupState is
called for a record somewhere in the middle of a group, the gbMiddle flag is
returned. Finally, if the current record is the only record in the group,
GetGroupState returns a set containing both the gbFirst and gbLast flags.
GetGroupState can be particularly useful for suppressing redundant information
when displaying a dataset's data in a multi-record view, like that provided by the
DBGrid component. For example, consider the preceding figure of the main
form, Figure 10-5. Notice that the CustomerTotal AggregateField value is
displayed for each and every record, even though it is being calculated on a
customer-by-customer basis. Not only is the redundant aggregate data
unnecessary, it makes reading the data more difficult. This is a classic signal-to-
noise ratio problem.
Using GetGroupState, you can test whether or not a particular record is the last
record for the group, and if so, display the value for the CustomerTotal field.
For records that are not the last record in their group (based on the CustIdx
index), you can suppress the display of the data. Displaying total amounts for a
customer in the last record for that customer serves to emphasize the fact that
the calculation is based on all of the customer's records.
Determining group state and suppressing or displaying the data can be achieved
by adding an OnGetText event handler to the CustomerTotal AggregateField.
The following is the OnGetText event handler for the FDQuery1CustomerTotal
AggregateField in the FDAggregatesAndGroupState project:
else
Text := '';
end;
If you think about it, you might also want to suppress the CustNo field for all
but the first records in the group. This event handler can be associated with the
CustNo field at design time, but only if you have added persistent data fields for
all of the fields that you want displayed in the DBGrid. If you want to use
dynamic fields with your DBGrid, you will need to hook the OnGetText event
handler to the dynamic CustNo field at runtime.
The following is the OnGetText event handler associated with the persistent
CustomerTableCustNo data field in the FDAggregatesAndGroupState project:
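A sketch of what such a handler does is shown below; it displays the customer number only on the first record of each group (the project's actual code, and the class and dataset names, may differ):

  procedure TForm1.CustNoGetText(Sender: TField; var Text: string; DisplayText: Boolean);
  begin
    if gbFirst in SalesTable.GetGroupState(1) then
      Text := Sender.AsString
    else
      Text := '';
  end;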
If you want to hook this event handler to a dynamic CustNo field at runtime,
you can do this using something similar to the following from the OnCreate
event handler of your form:
SalesTable.FieldByName('CUST_NO').OnGetText :=
CustNoGetText;
SalesTable.DisableControls;
SalesTable.Close;
try
with TAggregateField.Create(Self) do
begin
FieldName := 'NumberOfSales';
Expression := 'COUNT(CUST_NO)';
IndexName := 'CustIdx';
GroupingLevel := 1;
Active := True;
Visible := True;
Name := 'SalesTableNumberOfSales';
DataSet := SalesTable;
end;
with DBGrid1.Columns.Add do
begin
FieldName := 'NumberOfSales';
Title.Caption := 'Number of Sales';
Index := 5;
end;
//Hook up the OnGetText event handler
SalesTable.FieldByName('NumberOfSales').OnGetText :=
SalesTableGetNumberOfSales;
//Disable the menu item that creates the new AggregateField
Create1.Enabled := False;
finally
//Enable controls and re-open the FDQuery
SalesTable.EnableControls;
SalesTable.Open;
//Hook up the CUSTNO OnGetText event handler
SalesTable.FieldByName('CUST_NO').OnGetText :=
SalesTableCustNoGetText;
end;
end;
As you can see from this code, the event handler begins by calling
DisableControls on the FDQuery, which will prevent its subsequent closing
from causing a flicker on the DBGrid. Next, the AggregateField is created and
configured. Then, a new Column is added to the DBGrid for the display of this
new virtual field. The new Column is put in position six of the DBGrid, which
will cause it to be displayed to the right of the CustomerTotal field.
Next, the OnGetText event handler is set to the SalesTableGetNumberOfSales
method. Like the SalesTableCustomerTotalGetText method used by the
CustomerTotal field, this event handler displays the count of orders in the last
record of the group. Unlike the SalesTableCustomerTotalGetText event handler,
however, it does not format the number as a currency value.
The following is the code associated with the SalesTableGetNumberOfSales
method:
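The project's exact code may differ; a sketch that matches the description above (show the count only in the last record of each group, without currency formatting):

  procedure TForm1.SalesTableGetNumberOfSales(Sender: TField; var Text: string;
    DisplayText: Boolean);
  begin
    if gbLast in SalesTable.GetGroupState(1) then
      Text := Sender.AsString
    else
      Text := '';
  end;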
The menu item used to create this AggregateField is then disabled (since trying
to create another persistent field using the same name would raise an exception).
Finally, the data-aware controls attached to the FDQuery are re-enabled by calling
EnableControls, and the FDQuery is re-opened.
If you run this project, and select Create | New AggregateField, the main form
will look something like that shown in Figure 10-7.
Figure 10-7: An AggregateField has been added to the FDQuery and the
DBGrid at runtime
Calculated Fields
Calculated fields are virtual fields that display data produced by a calculation.
That calculation is performed by code you attach to a dataset’s OnCalcFields
event handler.
Closely related is another virtual field, called InternalCalc. InternalCalc fields
are particularly interesting because they actually store the values of the
calculations, which a regular calculated field does not. Since the values of the
InternalCalc calculation are stored, it is possible to create indexes that use those
values. Indexes were discussed in Chapter 7, Creating Indexes.
In almost every other way, InternalCalc and Calculated fields are identical.
InternalCalc fields just happen to be index-able, though the internal persistence
does impose a slight performance overhead. If you need to create an index on a
calculated field, use an InternalCalc field. Otherwise, use a Calculated field.
Note: Not only is the data associated with InternalCalc fields stored in memory,
but if you persist the contents of a FireDAC dataset that includes one or more
InternalCalc fields, that data is also persisted. Persisting and restoring data
associated with FireDAC datasets is discussed in Chapter 11, Persisting Data.
Both Calculated fields and InternalCalc fields are special in that you cannot
assign data to them directly, either through data-aware controls or
programmatically, with one exception discussed later. You can assign data to a
Calculated or InternalCalc field only when the dataset to which the calculated
field is attached is in the dsCalc state. And, datasets are in the dsCalc state only
when they are executing their OnCalcFields event handlers.
Here is how it works. When a dataset needs data from a Calculated or an
InternalCalc field, it executes its OnCalcFields event handler if one has been
assigned. From that event handler, references to the dataset are to the current
record only, and it is from this event handler that you can assign data to the
Calculated and InternalCalc fields of the current record. If a dataset is being
displayed in a grid like a DBGrid, the OnCalcFields event handler will trigger
many times over a very short period of time, once for each record that needs to
be displayed. As a result, it is important that your calculations be as efficient as
possible. Otherwise, the overhead incurred by your calculations can have a
negative effect on performance.
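As an illustration, a handler along these lines assigns a value to a Calculated (or InternalCalc) field named Total from two Data fields; the field names are illustrative, and the FDCalcFields project's actual calculation may differ:

  procedure TForm1.FDQuery1CalcFields(DataSet: TDataSet);
  begin
    // Runs while the dataset is in the dsCalc state, once per record displayed
    DataSet.FieldByName('Total').AsCurrency :=
      DataSet.FieldByName('Quantity').AsInteger *
      DataSet.FieldByName('Price').AsCurrency;
  end;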
Figure 10-9 shows the running FDCalcFields main form. The Total column in
the displayed DBGrid contains the results of the calculation performed in the
preceding OnCalcFields event handler.
Note: Due to a bug in FireDAC at the time of this writing, the Total column will
not be entirely sorted if you click its header, even though doing so attempts to
sort on an InternalCalc field. The problem is that FireDAC does not execute
OnCalcFields for every record unless you explicitly navigate through every
record. Only after every record has been visited, and its InternalCalc value
calculated, will the sort operation properly order the dataset on that field.
Ok, I know what you are thinking. I could have performed that calculation in the
SQL executed by the query. Well, that’s absolutely true, and in many cases, that
would be a far more efficient way of doing things, but this is a demonstration of
Calculated fields, so that’s the route I took.
Note: Do not attempt to navigate the dataset to which your OnCalcFields event
handler is attached from within this event handler. You should use this event
handler only to assign values to Calculated and InternalCalc fields of the
current record.
Figure 10-10 shows the now completed main form of the FDCalcFields project,
where the right-most field is an InternalCalc field produced by an expression
defined at design time. In this case, the NewTotal column has been clicked once,
and the FDQuery has been properly sorted by the NewTotal values.
For a detailed discussion of the FireDAC scalar functions see Chapter 14, The
SQL Command Preprocessor.
Lookup Fields
A Lookup field is one that displays data from another, related table. For
example, the Orders table in the dbdemos.gdb database contains an employee
number field, but not an employee name field. If you want to display the
employee name in the Orders table, you can do so automatically using a Lookup
field, without resorting to a join in your SQL query or additional code. Lookup
fields permit this to be done entirely through properties, so long as your data has
the correct relationships (in a relational database sense).
Several conditions must be met if you want to use a Lookup field. First, the
table to which you want to add a Lookup field must include a foreign key. A
foreign key is a field (or set of fields) that maps to the primary key of another
dataset. In the case of the Orders and Employee tables of the dbdemos.gdb
database, this condition is met. Specifically, the EmpNo field in the Orders table
is a foreign key, corresponding to the EmpNo field (the primary key) in the
Employee table. In addition, you must have a dataset for the lookup table, a
table that the Lookup field is configured to use for lookup, and in this example,
that is the Employee table.
To be honest, Lookup fields are not used as extensively as they were in the early
days of Delphi. Maybe this is because a number of third-party vendors ship
components that do much of what Lookup fields do, but with more bells and
whistles. Or, maybe it's just because Delphi developers have
forgotten about them. Maybe once discovered again, Lookup fields will regain
their popularity. Whatever the reason, they are worth another look.
In this section, I am going to refer to the FDLookupField project, and point out
how the Lookup field was configured. I’ll leave it to you to decide if it’s
something you might be interested in using.
To begin with, this project contains two queries, FDQuery1 and FDQuery2. The
principal query selects records from the Orders table of the dbdemos.gdb
database. The second table, which I refer to as the lookup table, selects data
from the Employee table of this same database.
To begin with, FDQuery2, the lookup table, includes the following select query,
which selects EMPNO, FIRSTNAME, and LASTNAME fields, as well as a
field that concatenates the FIRSTNAME and LASTNAME fields, separated by
a single space, from the Employee table:
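The statement is along these lines (the alias used here for the concatenated field is illustrative):

  SELECT EMPNO, FIRSTNAME, LASTNAME,
    FIRSTNAME || ' ' || LASTNAME AS FULLNAME /* illustrative alias */
  FROM EMPLOYEE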
Next, FDQuery2 has an index, which orders the records by LASTNAME and
then FIRSTNAME. This index is shown here in the Object Inspector. Finally,
FDQuery2 is set to use this index.
Next, FDQuery1 includes the following SQL statement, which selects all
records from the Orders table, including the employee number (EMPNO):
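SELECT * FROM ORDERS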
Let’s now turn our attention to defining the Lookup field. First, display the
Fields Editor for FDQuery1 and add all fields by right-clicking and selecting
Add All Fields from the displayed context menu, or by pressing Ctrl-F. We are
now ready to add the Lookup field.
Add a new field by right-clicking in the Fields Editor and selecting New Field,
or by pressing Ctrl-N. Delphi responds by displaying the New Field dialog box.
With the Lookup radio button selected in the Field Type panel, set Name to
EmployeeName, Type to string and Size to 30.
Because we selected the Lookup radio button, the Lookup definition fields
become enabled. Set Key Fields to the field or fields that constitute the foreign
key of the Orders table, which is the EMPNO field in this case.
Next, set DataSet to the lookup table, which is FDQuery2. In Lookup Keys,
enter the name or names of the primary key fields from the lookup table. These
primary key fields don't have to have the same names as the fields you entered in
Key Fields, but there must be the same number of them, and they must
correspond in type to the Key Fields entries. Since the EMPNO field
in the Employee table is the primary key, set the Lookup Keys field to EMPNO.
In short, we are telling Delphi that the EMPNO field of FDQuery1 is associated
with the EMPNO field of FDQuery2.
Finally, we set Result Field to the field of the lookup dataset whose data we
want to see, which in this case is the concatenated full-name field. In short, the
user will select an employee by full name, but it is the EMPNO field value that
is actually being set. Sound confusing? Well, once you see it in action you'll get it.
When the Lookup field has been configured, the New Field dialog box will look
like that shown in Figure 10-11.
Figure 10-11: The New Field dialog box with a Lookup field configuration
Once we click OK, we can change the position of this new field in FDQuery1's
Fields Editor, in order to define where in the DBGrid this field should be shown.
In the following figure, EmployeeName has been moved to a position before (to
the left of) the EMPNO field. Traditionally, we would actually remove the
EMPNO field, since it's just a number, and we would let the user select the
EMPNO by selecting the employee's full name. However, we're keeping it here
so you can see the Lookup field in action.
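For reference, the same Lookup field could also be created in code at runtime; a sketch, assuming the illustrative FULLNAME alias used above (the project itself defines the field at design time):

  procedure TForm1.CreateEmployeeNameLookup;
  var
    LookupFld: TStringField;
  begin
    LookupFld := TStringField.Create(Self);
    LookupFld.FieldName := 'EmployeeName';
    LookupFld.FieldKind := fkLookup;
    LookupFld.Size := 30;
    LookupFld.KeyFields := 'EMPNO';            // foreign key in FDQuery1 (Orders)
    LookupFld.LookupDataSet := FDQuery2;       // the lookup (Employee) dataset
    LookupFld.LookupKeyFields := 'EMPNO';      // primary key in the lookup dataset
    LookupFld.LookupResultField := 'FULLNAME'; // what gets displayed (illustrative name)
    LookupFld.DataSet := FDQuery1;             // attach last, with FDQuery1 closed
  end;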
Figure 10-13: The Lookup field enables a dropdown menu in the DBGrid,
displaying the full names associated with EMPNO field values
Figure 10-14: Selecting the lookup field value updates the lookup key
field(s) in FDQuery1
Understanding FieldOptions
As I described at the outset of this chapter, until XE6, the creation of at least one
persistent field prevented the creation of dynamic fields, with the one exception
that I've noted. Specifically, you can create one or more Aggregate fields, and so
long as no other type of persistent field exists for the dataset, dynamic fields are
automatically created at runtime for each field in the dataset’s result set.
This special nature of Aggregate fields can be seen in the Fields Editor. Here
you can see the Fields Editor for an FDQuery that includes one Aggregate field.
Notice that the Aggregate persistent field appears in a special pane at the bottom
of the Fields Editor.
Note: I get a little technical in this section when discussing persistent fields
defined using Fields or FieldDefs. I talk about these techniques in more detail in
Chapter 12, Understanding FDMemTables. If you want more information on
Fields and FieldDefs, you might want to quickly browse through that chapter to
familiarize yourself with the associated issues.
TFieldOptions = class(TPersistent)
private
FDataSet: TDataSet;
FAutoCreateMode: TFieldsAutoCreationMode;
FPositionMode: TFieldsPositionMode;
FUpdatePersistent: Boolean;
procedure SetAutoCreateMode(const Value:
TFieldsAutoCreationMode);
protected
function GetOwner: TPersistent; override;
public
constructor Create(DataSet: TDataSet);
procedure Assign(Source: TPersistent); override;
published
property AutoCreateMode: TFieldsAutoCreationMode
read FAutoCreateMode
write SetAutoCreateMode default acExclusive;
property PositionMode: TFieldsPositionMode
read FPositionMode
write FPositionMode default poLast;
[Default(False)]
property UpdatePersistent: Boolean
read FUpdatePersistent
write FUpdatePersistent default False;
end;
TFieldsAutoCreationMode =
(acExclusive, acCombineComputed, acCombineAlways);
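For example, assuming acCombineAlways is the mode that allows dynamic Data fields to be created alongside existing persistent fields, you can opt into the combined behavior with a single assignment:

  FDQuery1.FieldOptions.AutoCreateMode := acCombineAlways;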
The UpdatePersistent property comes into play when the metadata returned at
runtime may have created FieldDef values that are different from the configured
persistent field.
Under these conditions, you can control how the persistent field should initialize
the resulting FieldDef property of the field at runtime. When you set
UpdatePersistent to True, the Size, Precision, and Required properties of the
resulting FieldDef will be based on metadata. When set to False, the configured
properties of the persistent field take precedence.
Field and Fields Properties
Now that fields in a single dataset can be a mixture of persistent and dynamic
fields, it is natural that you might want to determine at runtime how a particular
field was created. At the individual TField level, you can use the
TField.LifeCycle property to make this determination. LifeCycle is lcAutomatic
if the field is dynamic, and lcPersistent if it is the result of a persistent field
definition.
You can ask a similar question about the collection of TFields in a FireDAC
dataset. Use the dataset’s Fields property to examine the LifeCycles property.
TDataSet.Fields.LifeCycles will return a set of TFieldLifeCycle flags. If this set
includes both the lcAutomatic and lcPersistent flags, at least one dynamic and
one persistent field can be found in the dataset. If LifeCycles contains a single
flag, all fields of that dataset are of the returned type.
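A brief sketch of both checks (the dataset and field names are illustrative):

  if FDQuery1.FieldByName('CUST_NO').LifeCycle = lcPersistent then
    ShowMessage('CUST_NO is a persistent field');
  if FDQuery1.Fields.LifeCycles = [lcAutomatic, lcPersistent] then
    ShowMessage('FDQuery1 contains both dynamic and persistent fields');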
Chapter 11
Persisting Data
All FireDAC datasets, as well as the FDSchemaAdapter, can save their data to a
file or stream. At a later time, that data can be loaded into a compatible
component, thereby restoring some or all of the state of the original object. For
example, a FireDAC FDQuery can be edited and then saved to a file. At a later
time, that data can be loaded into another FDQuery and edited again. If that
FDQuery was in a cached updates mode at the time that the data was saved, any
changes in the change cache will be present in the FDQuery that subsequently
loads that data (assuming that you permit the change cache to be saved, which is
the default). This feature permits you to maintain an editing session over an
extended period of time after which the cached updates can be applied in a
single call to ApplyUpdates, even though the application may have been closed
and re-opened many times between the first edit and the call to ApplyUpdates.
This capability, being able to maintain a change cache across many application
sessions, supports a feature often referred to as the briefcase model of data
access. In the briefcase model, data is obtained from a database server and
stored locally. From that point forward, the data is loaded from a file, and saved
back to that file. At some later time, the application reconnects to the database
after which ApplyUpdates is used to write those changes to the original tables.
Prior to the introduction of FireDAC into RAD Studio, the ClientDataSet was
the only component in Delphi that provided you with this capability. (Cached
updates is discussed in detail in Chapter 16, Using Cached Updates.)
Being able to persist and restore data permits you to implement features that go
well beyond simply supporting the briefcase model. For example, the contents
of an FDQuery can be streamed across the Internet, from one endpoint to
another, providing you with a convenient means of sharing data between
applications.
Here’s another example. Imagine that you have a complex accounting report
that takes significant time to generate because it must first perform a large
number of calculations on a closed accounting period. Since those calculations
will not change, given that the accounting period is closed, the printing of
additional copies of that report in the future can be streamlined by saving the
calculated results to a file the first time the report is generated, and simply
loading that file whenever another copy is needed.
Code: The FDSaveAndLoad project can be found on the code download. See
Appendix A for more information.
SaveToFile has two optional parameters. The first parameter is the name of the
file that you are saving. If you pass a simple file name, with no path
information, the file will be saved in the directory listed in the
ResourceOptions.DefaultStoreFolder property. (FireDAC datasets inherit this
property from their FDConnection, and it cannot be overridden.) If the filename
you pass includes path information, that path is used instead.
Environment variables can also be used. So long as the value of the environment
variable defines a directory, or a portion of a directory path, you can use the
following in a path definition:
$(EnVar)
where EnVar is the name of an environment variable.
As must be obvious from this discussion, you can use one or more substitution
variable segments in your filename. For example, the following statement will
write the data from FDQuery1 into a file named DATAn, where n is the next
unused integer, with the extension XML, into the directory where the
application’s executable resides:
FDQuery1.SaveToFile('$(RUN)\DATA$(NEXT).XML');
You must supply the slashes in your filename string, but you can use either
backslashes or forward slashes, and FireDAC will convert them, if necessary,
based on the operating system.
If you omit the extension of the filename, the extension defined in the
ResourceOptions.DefaultStoreExt property is used (again, inherited from the
FDConnection). If DefaultStoreExt is empty, a file with no extension is saved.
You can omit the filename parameter of SaveToFile altogether, so long as you
have assigned a value to ResourceOptions.PersistentFileName. The rules that
apply to the filename parameter are applied to PersistentFileName. It can
include a path and an extension, and if either of these are omitted, the rules
described above are applied.
An example of SaveToFile is found in the OnClick event handler of the button
labeled Save To File in the FDSaveAndLoad project. This event handler is
shown here:
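The handler is not reproduced here; a minimal sketch (assuming a SaveDialog component, which the actual project may or may not use) is:

  procedure TForm1.btnSaveToFileClick(Sender: TObject);
  begin
    if SaveDialog1.Execute then  // SaveDialog1 is an assumed component
      // With no format parameter, the format is inferred from the extension
      FDQuery1.SaveToFile(SaveDialog1.FileName);
  end;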
FILE FORMATS
The second optional parameter is the format of the saved file. FireDAC supports
three file formats: XML (eXtensible Markup Language), JSON (JavaScript
Object Notation), and a binary format. You define the format explicitly by
passing one of the TFDStorageFormat values, sfXML, sfJSON, or sfBinary, in the
second parameter.
If you omit the second parameter, FireDAC will use sfAuto, which will base the
format on the file extension of the filename being written. If the extension is
XML, the XML format will be used. If the extension is JSON, the JSON format
will be used. If you use FDB, BIN, or DAT, the binary format will be used
(technically, with sfAuto, any format other than XML or JSON will use the
binary format). If you use sfXML, sfJSON, or sfBinary, the corresponding
format will be used, regardless of the file extension.
If the filename has no extension, the rules will be applied to the extension
defined in ResourceOptions.DefaultStoreExt. If DefaultStoreExt is empty, the
format specified in ResourceOptions.DefaultStoreFormat is used. When the
second parameter is sfAuto, the filename has no extension, DefaultStoreExt is
empty, and DefaultStoreFormat is sfAuto, the binary format is used.
In addition to specifying the file format to use, you must also ensure that you
supply FireDAC with the resources it needs to support that format. This can be
done by adding an instance of a corresponding FDStanStorageformatLink
component, where format is one of the following values: XML, JSON, or BIN.
The use of the FDStanStorageformatLink components is similar to the use of the
FDGUIxWaitCursor component, in that the component itself is not important.
Instead, it is the act of adding this component to the form and then saving or
compiling that results in the addition of an essential unit that provides FireDAC
with the resources it needs to support the corresponding format. With respect to
format support, these corresponding units are named
FireDAC.Stan.StorageXML, FireDAC.Stan.StorageJSON, and
FireDAC.Stan.StorageBin. Once these units appear in your uses clause, you may
remove the corresponding component, though I like to leave them in place as a
reminder, to me or a future developer, that a requirement needed to be met.
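If you add the units manually, the relevant portion of the uses clause looks like this:

  uses
    FireDAC.Stan.StorageXML, FireDAC.Stan.StorageJSON, FireDAC.Stan.StorageBin;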
While there are similarities between FDGUIxWaitCursor and the
FDStanStorageformatLink components, there is one important difference.
Beginning with Delphi XE8, you no longer need to add the FDGUIxWaitCursor
component to your project. By comparison, even with the most recent version of
FireDAC, you must either add the FireDAC.Stan.Storageformat unit manually
or through the placement of the corresponding FDStanStorageformatLink
component.
Figure 11-3: The StorePrettyPrint property has been used to format this
XML. The data has been clipped intentionally
WHAT TO PERSIST
Under normal circumstances, you will want to persist all of the information
associated with the FireDAC dataset that you are saving. This includes the data
itself, as well as the metadata. Both are necessary in order to load the saved
information back into a corresponding FireDAC dataset.
In addition, if your FireDAC dataset is in cached updates mode at the time you
are saving, you will need to save that information in order to continue your
editing session when the data is once again loaded.
What information is persisted when you call SaveToFile (as well as
SaveToStream) is controlled by the ResourceOptions.StoreItems property. This
property consists of a set of one or more TFDStoreItem flags. The declaration of
the TFDStoreItem enumeration is shown here:
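TFDStoreItem = (siMeta, siData, siDelta, siVisible);
TFDStoreItems = set of TFDStoreItem;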
Similarly, you might omit the siData flag if you want to save just the structure to
be loaded at a later time. This might be done in cases where the data is being
saved in a traditional fashion, for example, to a database, but the structure was
created dynamically at runtime and your code needs to re-create an empty
FireDAC dataset with that same structure at a later time.
As mentioned previously, you can include the change cache by including the
siDelta flag in StoreItems. Consider the following query:
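A query along these lines, against the InterBase sample Customer table (the exact select list in the project may differ):

  SELECT CUST_NO, CUSTOMER, CONTACT_FIRST, CONTACT_LAST FROM CUSTOMER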
Figure 11-4 shows a portion of the pretty-printed XML that was generated after
the record for customer 1001 had its Customer field changed from
'Signature Design' to 'Signature Design, Ltd.'. In this figure, the change is seen
in the Original and Current elements of RowID="0", and it can be used to restore
this revision if the file is loaded into a FireDAC dataset.
Figure 11-5: FireDAC saves the change cache information when siDelta is
included in the StoreItems property. This image has been clipped
intentionally
You use the siVisible flag to control the persistence of visible records of the
FireDAC dataset. When the siVisible flag is omitted, all records contained in the
dataset are persisted.
In most situations, this flag doesn’t make any difference, since you want to
persist all records contained in the dataset. On the other hand, if your dataset is
in the cached updates mode and you have changed the default value of
FilterChanges, siVisible can have a significant impact on what gets persisted.
Here is an example of a situation where siVisible affects what gets saved.
Imagine an FDQuery that is in cached updates mode, from which you have deleted
some records, and whose FilterChanges property you have modified to
include only the rtDeleted flag. The result is that only deleted records will be
visible (accessible programmatically or shown in a grid) in the dataset, even
though other unmodified, inserted, and modified records may be present. In this
situation, including the siVisible flag in StoreItems will result in only the
deleted records being persisted. By comparison, if omitted, all records, including
deleted, will be persisted.
C_FD_StorageVer = 15;
Note: If the saved file included cached updates, and StoreItems includes the
siData flag, the loaded FireDAC dataset’s UpdatesPending property will be
True. However, loading cached updates using LoadFromFile (or
LoadFromStream) will not change the CachedUpdates property of that dataset
from False to True. If you load a FireDAC dataset from a file that may include
cached updates, and siDelta is in StoreItems, you should test the
UpdatesPending property following the call to LoadFromFile, and set
CachedUpdates to True if UpdatesPending is True. Otherwise, you will not be
able to edit the change cache or apply those updates.
As with SaveToFile, the filename you pass to LoadFromFile can include a
relative path, an absolute path, as well as variable substitution using the
format $(name), where name can either be one of the predefined variables listed
in Table 11-1 or an environment variable.
Other similarities to SaveToFile include the use of
ResourceOptions.DefaultStoreFolder and ResourceOptions.DefaultStoreExt
when the path or file extension parts of the filename are omitted, as well as the
use of the ResourceOptions.PersistentFileName property when the filename is
omitted entirely.
LoadFromFile is demonstrated in the OnClick event handler of the button
labeled Load From File, shown here. Also shown is the UpdateButtons custom
method, which enables or disables buttons on the main form based on the active
state of the form’s FDQuery, as well as the value of the FDQuery’s
UpdatesPending property. (The UpdateButtons method is also called from the
DataSource’s OnUpdateData property, in order to keep the buttons properly
synchronized with the state of the FDQuery on the form.)
procedure TForm1.btnLoadFromFileClick(Sender: TObject);
var
FileExt: string;
i: Integer;
begin
if OpenDialog1.Execute then
begin
FileExt := ExtractFileExt(OpenDialog1.FileName).ToUpper;
if FileExt = '.XML' then
FDQuery1.LoadFromFile( OpenDialog1.FileName,
TFDStorageFormat.sfXML )
else
if FileExt = '.JSON' then
FDQuery1.LoadFromFile( OpenDialog1.FileName,
TFDStorageFormat.sfJSON )
else
if FileExt = '.BIN' then
FDQuery1.LoadFromFile( OpenDialog1.FileName,
TFDStorageFormat.sfBinary )
else //everything else
FDQuery1.LoadFromFile( OpenDialog1.FileName,
TFDStorageFormat.sfAuto )
end;
if not FDQuery1.CachedUpdates and FDQuery1.UpdatesPending then
FDQuery1.CachedUpdates := True;
UpdateButtons( FDQuery1.Active );
end;
Figure 11-6 shows how the main form looks when the FourFieldsSample.xml
file is loaded. A clipped view of this file was shown in Figure 11-5, and it
contained one update in the change cache. The UpdateButtons method, having
detected updates pending, has configured the form to enable working with the
cache.
FILE FORMATS
There are also similarities between LoadFromFile and SaveToFile when it
comes to the file format. If you have supplied the filename parameter, you can
also specify the format of the file being loaded by providing an
FDStorageFormat value in the second parameter. Importantly, if you provide a
value other than sfAuto (or if ResourceOptions.DefaultStoreFormat in the
FDConnection is set to a value other than sfAuto), the file format that is
specified must match that associated with the format written when SaveToFile
was called. If there is a mismatch, the call to LoadFromFile will raise an exception.
Likewise, if you omit the parameter (in which case sfAuto is used), or you
specify sfAuto explicitly, or the FDConnection’s
ResourceOptions.DefaultStoreFormat is set to sfAuto, the extension of the file
being loaded must match the default extension for the associated format, and
that too must correspond to the format used when SaveToFile was called. In
other words, LoadFromFile will only work when the format of the file matches
the format that LoadFromFile is expecting.
One final word concerning LoadFromFile. FireDAC must have access to the
resources associated with reading the format you are loading. As described
earlier in this chapter, you can achieve this by placing the corresponding
FDStanStorageformatLink components in your project or by manually adding
the necessary FireDAC.Stan.Storageformat units to your uses clause.
MERGING DATA WHEN LOADING
In most situations, you call LoadFromFile (or LoadFromStream) to activate a
FireDAC dataset using the metadata that was previously saved, and load it with
the saved data. You might even be loading a change log. As mentioned at the
outset of this chapter, this is a capability that previously was only available in
Delphi through the ClientDataSet.
But FireDAC goes a step further. You can call LoadFromFile (or
LoadFromStream) on an active FireDAC dataset, and, so long as the structure
of the active dataset is consistent with that of the data being loaded, FireDAC
will merge the data from the file being loaded with that already in the dataset.
ClientDataSets cannot do this.
Being able to merge previously saved data into an active FireDAC dataset is a
significant feature that gives you added flexibility. FireDAC provides for this
merging in two ways. You can merge the data, and you can merge the metadata.
This is controlled through the ResourceOptions.StoreMergeData and
ResourceOptions.StoreMergeMeta properties.
TFDMergeDataMode = (dmNone,
dmDataSet, dmDataAppend, dmDataMerge,
dmDeltaSet, dmDeltaAppend, dmDeltaMerge);
The FDSaveAndLoad project contains code that permits you to test the effects
of various StoreMergeData and StoreMergeMeta settings. Two comboboxes on
the main form are initialized when the form is opened to the StoreMergeData
and StoreMergeMeta settings of the FDQuery, as shown in this code segment:
cbxStoreMergeData.ItemIndex :=
Ord( FDQuery1.ResourceOptions.StoreMergeData );
cbxStoreMergeMeta.ItemIndex :=
Ord( FDQuery1.ResourceOptions.StoreMergeMeta );
Figure 11-7 depicts the FDSaveAndLoad main form after two previously saved
files were loaded. The first file, FourFieldsSample.XML, which is shown in
Figure 11-5, was loaded with StoreMergeData set to dmNone and
StoreMergeMeta set to mmNone. The second file, SampleData.xml, was loaded
with StoreMergeData set to dmDataAppend and StoreMergeMeta set to
mmNone. A clipped version of this second file is shown in Figure 11-3.
Figure 11-7: Use the Store Merge Data and Store Merge Meta comboboxes
to test the effects of merging one or more previously saved files with the
contents of a FireDAC dataset
Automating Persistence
As you learned earlier, you can omit the name of the file to which you want to
save data or from which you want to load data by providing a file name in the
ResourceOptions.PersistentFileName property. Furthermore, you can control the
directory in which this file is stored using the
ResourceOptions.DefaultStoreFolder property, and the file extension using the
ResourceOptions.DefaultStoreExt property. These last two properties belong to
the FDConnection or FDManager with which the FireDAC dataset is associated.
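In addition, the ResourceOptions.Persistent property (which the FDAutoLoadBackup project described in the next section relies on) instructs the dataset to load from, and save to, that file automatically. A sketch, with an illustrative filename:

  FDMemTable1.ResourceOptions.Persistent := True;
  FDMemTable1.ResourceOptions.PersistentFileName := 'Clients.json'; // illustrative name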
Maintaining Backups
When you are saving a FireDAC dataset to a file, FireDAC can maintain a
backup file of the previous version of your data. This is especially useful if you
want to treat one or more persisted FireDAC datasets as a local database, which
can be very nice if you need to implement a briefcase model of data access.
When you set the ResourceOptions.Backup property of a FireDAC dataset to
True, FireDAC will create a backup file of your dataset when you save changes,
whether you call SaveToFile or have FireDAC automatically save your changes.
Furthermore, you use the ResourceOptions.BackupFolder and
ResourceOptions.BackupExt properties of your FireDAC’s connection to define
a folder in which the backup file should be placed, and the extension to use for
the file. The default backup folder is the folder in which you save your data, and
the default extension is .bak.
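A minimal configuration sketch (for an FDMemTable the folder and extension are set on the FDManager, as the Note later in this chapter explains; the values are illustrative):

  FDMemTable1.ResourceOptions.Backup := True;
  FDManager.ResourceOptions.BackupFolder := '$(TEMP)\FDBackups'; // illustrative folder
  FDManager.ResourceOptions.BackupExt := '.bak';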
FireDAC saves the backup file using the name of the data file plus the backup
extension. FireDAC creates the backup file using the same format as the data
file. As a result, if you save a data file using the binary format, the backup file
will be in the binary format.
The project FDAutoLoadBackup demonstrates both automatic persistence as
well as FireDAC’s backup capability. Figure 11-8 shows the main form of this
project in Delphi’s form designer.
The OnCreate event handler of this project is used to ensure that the backup
folder exists, and that the FDMemTable on the form is configured to auto load
and create backups. In addition, if the file to be loaded does not already exist, it
is created. This OnCreate event handler is shown here:
if not FileExists( FDMemTable1.ResourceOptions.PersistentFileName ) then
begin
with FDMemTable1.FieldDefs do
begin
Clear;
Add('ClientID', ftInteger);
Add('ClientName', ftString, 45 );
Add('Address1', ftString, 50 );
Add('Address2', ftString, 50 );
Add('LastContact', ftDateTime );
Add('Notes', ftMemo );
Add('Active',ftBoolean);
end; //with FDMemTable.FieldDefs
FDMemTable1.CreateDataSet;
end
else
FDMemTable1.Open;
end;
Note: FDMemTables, like the one used in this example, do not use
FDConnections. As a result, the singleton FDManager was used to configure
the TFDTopResourceOptions settings, such as DefaultStoreFolder,
BackupFolder, and BackupExt. Also, since I have used a substitution variable
with the format $(name) in the BackupFolder property, I used the FDExpandStr
function from the FireDAC.Stan.Util unit to expand the substitution variable to
a fully qualified directory path.
As you may recall from Chapter 3, Configuring FireDAC, the FDManager sits at
the top of FireDAC’s configuration inheritance hierarchy.
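For example (assuming FireDAC.Stan.Util is in the uses clause), the backup
folder can be expanded and created like this:

// Expand any $(...) substitution variable in the configured folder
// before making sure that the folder actually exists
ForceDirectories(
  FDExpandStr( FDManager.ResourceOptions.BackupFolder ) );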
The automatic saving of the data occurs when the FDMemTable is closed. This
closing is performed from the OnClose event handler of the form. This event
handler, shown here, also ensures that the current record has been posted before
the dataset is closed.
procedure TForm1.FormClose(Sender: TObject; var Action:
TCloseAction);
begin
if FDMemTable1.State in dsEditModes then
FDMemTable1.Post;
FDMemTable1.Close;
end;
This project also includes a button on the main form labeled Show Backup,
which you can use to view the contents of the backup that was created the last
time the FDMemTable’s contents were saved.
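In outline, such a handler computes the backup file name and loads that file into
a dataset on a separate viewer form. The sketch below is an approximation; the
form, component, and button names in it are assumptions, and only the file-name
rule follows from the description above:

procedure TForm1.ShowBackupBtnClick(Sender: TObject);
var
  BackupFileName: string;
begin
  // The backup uses the data file's name plus the backup extension,
  // and is written to the configured backup folder
  BackupFileName :=
    IncludeTrailingPathDelimiter(
      FDExpandStr( FDManager.ResourceOptions.BackupFolder ) ) +
    ExtractFileName( FDMemTable1.ResourceOptions.PersistentFileName ) +
    FDManager.ResourceOptions.BackupExt;
  BackupForm.BackupMemTable.LoadFromFile( BackupFileName );
  BackupForm.Show;
end;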
Figure 11-9 shows how the backup looks after some changes were posted.
Figure 11-9: The Clients table has been created and edited. The contents of
the previous backup are shown in the top-level form
Note: Recall that you can use the $(NEXT) substitution variable in path
definitions, permitting you to create a different backup each time a file is
persisted.
Persisting to Streams
Persisting a FireDAC dataset to a stream is very similar to persisting to a file,
with just a few differences. These include no default folder option, no automatic
load on open and save on close, and no backup option. Otherwise, the options
are the same, such as having a choice of persistence format, storage options
(data, metadata, and delta), and version.
Persisting a FireDAC dataset to a stream, and restoring that dataset from a
stream, is demonstrated in the FDSaveAndLoad project. In this project, the
stream is used to write the contents of the FireDAC dataset to a BLOB field of
an InterBase table. Loading from a stream is demonstrated by retrieving the
previously saved stream into a FireDAC dataset.
The table that this code writes to and reads from is created in the OnCreate
event handler of the main form of the FDSaveAndLoad project if it does not
already exist.
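In outline, that step checks whether the table exists and creates it when it does
not. A sketch might look like the following; the connection name and column
names, apart from the StoredDataSets table name, are assumptions:

var
  Tables: TStringList;
begin
  Tables := TStringList.Create;
  try
    FDConnection1.GetTableNames('', '', '', Tables);
    if Tables.IndexOf('STOREDDATASETS') = -1 then
      FDConnection1.ExecSQL(
        'CREATE TABLE StoredDataSets ( ' +
        '  ID INTEGER NOT NULL PRIMARY KEY, ' +
        '  SavedOn TIMESTAMP, ' +
        '  StoredData BLOB SUB_TYPE 0 )');
  finally
    Tables.Free;
  end;
end;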
Saving to a Stream
You save to a stream by calling the FireDAC dataset’s SaveToStream method.
This method has the following signature:

procedure SaveToStream(AStream: TStream;
  AFormat: TFDStorageFormat = sfAuto);
The first parameter is a required parameter, and it is the stream to which you are
writing the data. The second parameter is an optional parameter. If you omit it,
FireDAC will use its binary format for the stream. If you want to persist your
data to a text-based stream, set AFormat to either sfXML or sfJSON.
Writing a FireDAC dataset to a stream is demonstrated in the OnClick event
handler for the button labeled Save To Stream.
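In outline, the handler does something along these lines (only FDQuery1 and the
StoredDataSets table are taken from the project as described below; the query
component and parameter used for the INSERT are assumptions):

var
  ms: TMemoryStream;
begin
  ms := TMemoryStream.Create;
  try
    // Write the dataset to the stream and rewind it
    FDQuery1.SaveToStream( ms, sfBinary );
    ms.Position := 0;
    // Write the stream into the BLOB field of the StoredDataSets table
    StoreQuery.ParamByName('StoredData').LoadFromStream( ms, ftBlob );
    StoreQuery.ExecSQL;
  finally
    ms.Free;
  end;
end;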
The essential part of this code is found immediately following the call to the
TMemoryStream constructor. After creating the memory stream, the data of the
FDQuery is written to it, after which, the Position property of the memory
stream is reset to 0. What happens next depends on what you need to do with the
stream. In this case, I am writing it to a BLOB field in the StoredDataSets table.
However, I could just as easily have opened a socket connection to some service
and sent that stream over the Internet to a receiving endpoint.
When the SaveToStream call is made, the StoreItems, Version, and
StorePrettyPrint properties are used to determine what and how to write to the
stream.
Loading from a Stream
You load the previously persisted contents of a FireDAC dataset from a stream
by calling the LoadFromStream method. Here is the signature of this method:

procedure LoadFromStream(AStream: TStream;
  AFormat: TFDStorageFormat = sfAuto);
LoadFromStream has one required parameter and one optional parameter. The
required parameter is the stream containing the previously persisted data, and
the optional parameter permits you to declare the format used to persist the
stream. If you omit the second parameter, FireDAC will attempt to read the
stream using the binary format. If you know that the stream was created using
either XML or JSON formats, you must specify which format was used during
creation. FireDAC raises an exception if there is a mismatch between the format
used during creation and the one FireDAC expects when loading the stream.
The use of LoadFromStream is demonstrated in the OnClick event handler of
the button labeled Load From Stream.
This event handler is a bit more involved, in that it must first ask the user to
select which of the previously saved datasets they want to load. After querying
the StoredDataSets table for its records, those records are displayed in the
ChooseDataSetForm dialog box shown in Figure 11-10. If the user clicks the
OK button on the dialog box, the BLOB field from the current record of the
query is used to write the previously saved data to a stream. The Position
property of this stream is then reset to 0, after which the stream is loaded into
FDQuery1.
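A sketch of the loading step itself might look like this (the query and field
names other than FDQuery1 are assumptions):

var
  ms: TMemoryStream;
begin
  ms := TMemoryStream.Create;
  try
    // Copy the previously saved data from the BLOB field into the stream
    TBlobField( ChooseQuery.FieldByName('StoredData') ).SaveToStream( ms );
    ms.Position := 0;
    FDQuery1.LoadFromStream( ms, sfBinary );
  finally
    ms.Free;
  end;
end;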
As with the LoadFromFile method, how the contents of the stream get loaded
depends on the values you provide in the ResourceOptions.StoreMergeData and
ResourceOptions.StoreMergeMeta properties. For example, if StoreMergeData
is set to dmDataAppend, the contents of the stream will be added to the records
already present in the FireDAC dataset.
Persisting Schema Adapters
When you use the centralized model of cached updates, the schema adapter
maintains a single change log for all of its datasets. Not only does the change
log identify the changes that have been made to the participating FireDAC
datasets, it also maintains the order in which those changes will be applied.
Like FireDAC datasets, schema adapters can also be persisted. In fact, if you are
using the centralized model of cached updates, and need to persist the state of
your datasets, you do so using the schema adapter.
For example, if you have two FireDAC datasets pointing to the same schema
adapter through their SchemaAdapter properties, and the datasets are in cached
updates mode, you persist this information using the SaveToFile or
SaveToStream methods of the schema adapter. Specifically, it is not necessary
to call the SaveToFile or SaveToStream methods of the individual FireDAC
datasets.
Furthermore, you restore the persisted state by calling the LoadFromFile or
LoadFromStream methods of the schema adapter. Doing so will restore the
persisted state of the associated datasets. For example, if the schema adapter and
the two FireDAC datasets are closed, and you call the LoadFromFile method of
the schema adapter, the schema adapter as well as the two FireDAC datasets
will become active. If the persisted data includes the change cache, that
information will be restored as well.
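For example, assuming a schema adapter named FDSchemaAdapter1 and an
illustrative file name, the entire operation is two calls:

// Save the state of every dataset attached to the schema adapter,
// including the change cache when StoreItems is configured to include it
FDSchemaAdapter1.SaveToFile('CustomersAndSales.xml');

// Later, with identically named and configured datasets attached,
// restore the schema adapter and its datasets in one call
FDSchemaAdapter1.LoadFromFile('CustomersAndSales.xml');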
All of this assumes that the properties of the schema adapter were configured to
persist sufficient delta and metadata information using the StoreItems property,
and that the StoreMergeData and StoreMergeMeta properties were also properly
configured. In addition, it is also assumed that FireDAC datasets with the
appropriate component names and configurations are present at the time the
persisted data is loaded.
For example, if two FDQueries named CustomerQueryRB and SalesQueryRB
are connected to the schema adapter at the time when the schema adapter’s
SaveToFile method is called, there must be two FDQueries with these names
connected to the schema adapter at the time that LoadFromFile is invoked, and
these two FDQueries must be configured with SQL statements consistent with
those used by the FDQueries that were present when the schema adapter was
persisted.
Chapter 12
Understanding
FDMemTables
FireDAC’s FDMemTable is an in-memory dataset that implements the
TDataSet interface. FDMemTable is Delphi’s second in-memory dataset, a
relative newcomer compared with ClientDataSet, which made its original debut
in Delphi 3 Client/Server.
In-memory datasets are similar to other datasets with one important distinction
— an in-memory dataset maintains all of its content in memory. There are two
consequences to this feature.
First, operations such as locating records, sorting, setting ranges, filtering, and
the like are very fast, which makes in-memory datasets ideal for handling largely
read-only data in a cached environment.
Second, there is a limit to the amount of data that an in-memory dataset can
hold, though the upper limit is typically so high that this is rarely a concern. For
example, I've had in-memory datasets that hold many thousands of records, and
in some applications, many millions of records. It really depends on the amount
of space consumed by individual records.
That Delphi supports two different in-memory datasets raises the question,
“Which one should you use?” The answer is, “The one that best suits your
needs.” FDMemTable and ClientDataSet were designed for different purposes,
and as a result, each has its own strengths and weaknesses. I find that I use
both FDMemTables and ClientDataSets in my applications, where appropriate.
Note: You can find a YouTube video from a presentation I made at CodeRage 9
that explains some of these differences. I’ve learned a lot more about FireDAC
datasets since I made that recording, so it includes some inaccuracies.
Nonetheless, it is mostly correct. You can find this video by searching YouTube
for FDMemTables and ClientDataSets Compared.
For example, FDMemTables give you cloned cursors and nested datasets, and
these are two features that are hugely powerful when applied correctly.
In addition, the cached nature of in-memory datasets alone is a game-changing
capability. And Delphi 10.2 Tokyo introduced design-time editing of
FDMemTable data, which is interesting, and not currently available in other
FireDAC datasets.
What I am trying to say is that FDMemTables can play an important role in your
applications, though this role is more focused than that of ClientDataSets. If you
need cached updates, temporary indexes, aggregate fields, and dataset
persistence, you will use these capabilities as they are exposed from other
FireDAC datasets. On the other hand, if you want the blinding performance of
in-memory data, cloned cursors, and nested datasets, FDMemTable is there to
help.
For example, imagine keeping information about the files in a directory in an
in-memory table. You might populate this table when your application loads so
that this information can be searched quickly while the application is running,
without having to repeatedly read the contents of the directory. This can be
accomplished by creating an
FDMemTable that has fields in which this information can be held. Doing this,
however, requires that you know how to define fields for your FDMemTable,
since there is no database structure from which these fields can be derived.
There are two mechanisms by which you can define the structure of an
FDMemTable — FieldDefs and Fields.
Defining Structure Using FieldDefs
The FieldDefs property of an FDMemTable is a collection of TFieldDef
instances. Each FieldDef represents a column, or field, of the FDMemTable
once the FDMemTable is activated.
You can configure FieldDefs either at design time or at runtime. To define the
structure of an FDMemTable at design time, you use the FieldDefs collection
editor to create individual FieldDefs. You then use the Object Inspector to
configure each FieldDef, defining the field name, data type, size, or precision,
among other options.
At runtime, you define your FieldDef objects by calling the FieldDefs
collection’s Add or AddFieldDef methods.
This section begins by showing you how to create your FDMemTable's
structure at design time. Defining the structure at runtime is shown later in this
chapter.
CREATING FIELDDEFS AT DESIGN TIME
You create FieldDefs at design time using the FieldDefs collection editor. To
display this collection editor, select the FieldDefs property of an FDMemTable
in the Object Inspector and click the displayed ellipsis button. The FieldDefs
collection editor is shown in the following illustration:
Using the FieldDefs collection editor, click the Add New button (or press Ins)
once for each field that you want to include in your FDMemTable. Each click of
the Add New button (or press of Ins) will create a new FieldDef instance, which
will be displayed in the collection editor. For example, if you add four new
FieldDef instances to the FieldDefs collection editor, it will look something like
that shown here:
You must configure each FieldDef that is added to the FieldDefs collection
editor before you activate the FDMemTable. To configure a FieldDef, select the
FieldDef you want to configure in the collection editor or the Structure Pane,
and then use the Object Inspector to set its properties. Figure 12-1 shows how
the Object Inspector looks when a FieldDef is selected. (Notice that the
Attributes property has been expanded to display its subproperties.)
At a minimum, you must set the DataType property of each FieldDef. You will
also want to set the Name property. The Name property defines the name of the
corresponding Field that will be created.
Other properties you will often set include the Size property, which you define
for String, BCD (binary coded decimal), Bytes, and VarBytes fields, and the
Precision property for BCD fields. Similarly, if a particular field requires a value
before the record with which it is associated can be posted, set the faRequired
flag in the Attributes set property.
Figure 12-2 shows both the FieldDefs collection editor (with the first field
selected), as well as the Object Inspector. Names, data types, and sizes have
now been defined for each of the four fields defined for this FDMemTable.
After setting the necessary properties of each FieldDef, you must create the
FDMemTable's data store before you can use it. You can do this at either
design time or runtime.
To create the FDMemTable's data store at design time, select the FDMemTable
and use the Object Inspector to set its Active property to True. Creating the
FDMemTable data store at design time makes the FDMemTable active. This
data store is necessarily empty, as shown in Figure 12-3.
There are several advantages to creating the FDMemTable data store at design
time. The first is that the active FDMemTable has Fields (dynamic fields, to be
precise), and these Fields can easily be hooked up to data-aware controls such as
DBEdit, DBLabel, and DBImage, or to a BindSourceDB (if you are using
LiveBindings).
Figure 12-3: The FieldDefs define the fields of this active FDMemTable
Code: You will find the FDFieldDefsDesignTime project in the code download.
See Appendix A for more information.
The second advantage is that you can save the FDMemTable to a file. When you
save an FDMemTable to a file, you are saving its metadata as well as its data.
But in this case, there is no data. Nonetheless, the metadata is valuable, in that
any FDMemTable that subsequently loads the saved file from disk will become
active and will have the structure that you originally defined.
Saving an FDMemTable to file at design time, like any other FireDAC dataset,
involves right-clicking the dataset and selecting Save To File. Saving and
loading FireDAC datasets was discussed in greater detail in Chapter 11,
Persisting Data.
CREATING FIELDDEFS AT RUNTIME
Being able to create FieldDefs at design time is an important capability, in that
the Object Inspector provides you with assistance in defining the various
properties of each FieldDef you add. However, there may be times when you
cannot know the structure of the FDMemTable that you need until runtime.
There are two methods that you can use to define the FieldDefs property at
runtime. The easiest technique is to use the Add method of the TFieldDefs class.
The following is the syntax of Add:

procedure Add(const Name: string; DataType: TFieldType;
  Size: Integer = 0; Required: Boolean = False);
This method has two required parameters and two optional parameters. The first
parameter is the name of the field and the second is its type. If you need to set
the Size property, as is the case with fields of type ftString and ftBCD, pass the
size of the field in the third parameter. For required fields, set the fourth
parameter to True.
The following code sample creates an in-memory table with four fields:
const
  DataFile = 'mydata.xml';
begin
  if FileExists( DataFile ) then
    FDMemTable1.LoadFromFile( DataFile )
  else
  begin
    with FDMemTable1.FieldDefs do
    begin
      Clear;
      Add('FirstName', ftString, 20);
      Add('LastName', ftString, 25);
      Add('DateOfBirth', ftDate);
      Add('Active', ftBoolean);
    end; //with FDMemTable1.FieldDefs
    FDMemTable1.CreateDataSet;
  end;
end;
This code begins by defining the name of the data file, and then tests whether or
not it already exists. When it does not exist, the Add method of the FieldDefs
property is used to define the structure, after which, the in-memory dataset is
created using the CreateDataSet method. Setting the Active property of the
FDMemTable to True would produce the same result. CreateDataSet was
introduced in FDMemTable to provide compatibility with ClientDataSets.
If you consider how the Object Inspector looks when an individual FieldDef is
selected in the FieldDefs collection editor, you will notice that the Add method
is rather limited. Specifically, you cannot create hidden fields, readonly fields,
or BCD fields where you define precision, using the Add method. For these
more complicated types of FieldDef definitions, you can use the AddFieldDef
method of the FieldDefs property. The following is the syntax of AddFieldDef:

function AddFieldDef: TFieldDef;
As you can see from this syntax, this method returns a TFieldDef instance. Set
the properties of this instance to configure the FieldDef. The following code
sample shows how to do this:
const
  DataFile = 'mydata.xml';
begin
  if FileExists( DataFile ) then
    FDMemTable1.LoadFromFile( DataFile )
  else
  begin
    with FDMemTable1.FieldDefs do
    begin
      Clear;
      with AddFieldDef do
      begin
        Name := 'First Name';
        DataType := ftString;
        Size := 20;
      end; //with AddFieldDef do
      with AddFieldDef do
      begin
        Name := 'Last Name';
        DataType := ftString;
        Size := 25;
      end; //with AddFieldDef do
      with AddFieldDef do
      begin
        Name := 'Date of Birth';
        DataType := ftDate;
      end; //with AddFieldDef do
      with AddFieldDef do
      begin
        Name := 'Active';
        DataType := ftBoolean;
      end; //with AddFieldDef do
    end; //with FDMemTable1.FieldDefs
    FDMemTable1.CreateDataSet;
  end;
end;
Code: You can find the FDFieldDefsRuntime project in the code download.
Defining Structure Using Fields
You can also define the structure of an FDMemTable at design time by creating
persistent fields. To create a field, right-click an FDMemTable and select Fields
Editor from the displayed context menu (alternatively, double-click the
FDMemTable). Next, right-click the Fields Editor and select New Field (or press
the Ins key or Ctrl-N). The New Field dialog box is displayed, as shown in
Figure 12-5.
Figure 12-5: You use the New Field dialog box to define a persistent Field
To define the structure of your FDMemTable, you use Data fields. A Data field
is one whose data is stored in the FDMemTable and can be saved to a file or to a
stream. By default, Field Type is set to Data, and when created this way, it is a
persistent field.
Note: The other field types, Lookup, Aggregate, Calculated, and InternalCalc,
are virtual fields, and are described in detail in Chapter 10, Creating and Using
Virtual Fields.
Define your field by providing a field name and field type, at a minimum. The
value you enter in the Name field defines the name of the column in the
FDMemTable. You set Type to one of the supported Delphi field types, such as
String, Integer, DateTime, and so forth.
If your field is one that requires additional information, such as size, provide
that information as well. You can also optionally set Component. By default,
Component is set to the name of the dataset concatenated with the Name value.
You can set Component manually, but if you do so, ensure that it will be a
unique component name, as this is the name that is used for the TField object
that will be created, and component names must be unique within a given form,
frame, or data module.
When done, select OK to save your new field. That field will now appear in the
Fields Editor. The following illustration depicts the Fields Editor after four
fields have been added using the New Field dialog box.
The field that is created when you accept the New Field dialog box is one of the
descendants of TField, based on the value you set Type to on the New Field
dialog box. For example, if you set Type to string, a TStringField is created. By
comparison, if you set Type to integer, a TIntegerField is created.
You cannot change a field type once it has been created. If you have mistakenly
selected the wrong value for Type, you need to delete that field and create a new
field.
You also cannot use the New Field dialog box to make changes to a field once it
has been created. However, there is really no need to do that since the created
field is a published member of the form, data module, or frame on which your
FDMemTable appears. As a result, you can select the field in the Object
Inspector and make changes there. For example, Figure 12-6 shows the Object
Inspector for a string data field named FirstName that was created by the New
Field dialog box.
Figure 12-6: Use the Object Inspector to modify a Field that you have
created
That fields are published members of the container on which the FDMemTable
resides results in another interesting feature. They appear in the container's class
type definition, as shown in the following type definition from the
FDFieldsDesignTime project:
type
  TForm1 = class(TForm)
    FDMemTable1: TFDMemTable;
    DataSource1: TDataSource;
    DBGrid1: TDBGrid;
    DBNavigator1: TDBNavigator;
    FDMemTable1FirstName: TStringField;
    FDMemTable1LastName: TStringField;
    FDMemTable1DateOfBirth: TStringField;
    FDMemTable1Active: TBooleanField;
  private
    { Private declarations }
  public
    { Public declarations }
  end;
Code: You can find the FDFieldsDesignTime project in the code download.
It is interesting to note that these fields are defined in this fashion even before
the FDMemTable's data store has been created.
Fields that are created prior to an FDMemTable being made active are known as
persistent fields. (When you create an FDMemTable's data store based on
FieldDefs, fields are also created, but those are known as dynamic fields, since
they are created as part of the activation process.)
Persistent fields can be created for any type of dataset, not just FireDAC
datasets, or even FDMemTables in particular. However, the role that fields play
in FDMemTables is different from that for other datasets (other than the
ClientDataSet), in that FDMemTables and ClientDataSets are the only datasets
whose physical structure can be defined using Data field type definitions.
If you want to create the data store of an FDMemTable whose structure is
defined at design time, you use the same steps as you would with an
FDMemTable defined by FieldDefs — once you have defined the fields, you
make the FDMemTable active by setting its Active property to True.
Similarly, even though you have defined the Fields at design time, you can still
defer the creation of the FDMemTable's data store until runtime. Again, just as
you do with an FDMemTable whose structure is defined by FieldDefs, you
create the FDMemTable data store at runtime by calling the FDMemTable's
CreateDataSet method or by setting its Active property to True.
Code: You can find the FDFieldsRuntime project in the code download.
Figure 12-7 shows the main form from the FDFieldsRuntime project that uses
Fields to create an FDMemTable structure at runtime.
Defining Structure Using an FDTableAdapter
You can also attach an FDTableAdapter to an FDMemTable, through its Adapter
property, from which the structure of the FDMemTable will be derived. This
technique is useful when you want to base the structure of the FDMemTable on
the definitions found in an underlying database. Loading an FDMemTable’s
structure (and data) from an FDTableAdapter is demonstrated in Chapter 5,
More Data Access.
There is a final mechanism for defining an FDMemTable’s structure, and in
most cases, loading data as well. You can load data from an existing dataset into
an FDMemTable.
There are two ways to load data from an existing dataset into an FDMemTable.
The most flexible is to use the CopyDataSet method, which permits you to copy
data from any dataset. The second is to assign the Data property of a FireDAC
dataset to the Data property of an FDMemTable. Both of these techniques are
discussed in the following sections.
Loading an FDMemTable Using CopyDataSet
The CopyDataSet method is introduced in the FDDataSet class, which means
that it is available for all FireDAC datasets, including FDTables, FDQueries,
and FDMemTables. When you call CopyDataSet, you affect a destination
FireDAC dataset, the one from which CopyDataSet is invoked, based on one or
more characteristics of a source TDataSet. These characteristics can include
structure (the fields and their data types), indexes, calculated fields, aggregate
fields, and most importantly, data.
CopyDataSet takes two parameters. The first is the source TDataSet instance.
This is the dataset whose characteristics are being copied. Importantly, this
dataset does not need to be a FireDAC dataset — it can be any TDataSet
descendant, such as SQLDataSet, ClientDataSet, or ADOTable, to name just a
few possibilities.
The second parameter is a set of flags that identify the characteristics that will
be copied from the source dataset, and how they are copied. Here is the syntax
of CopyDataSet:
procedure CopyDataSet(ASource: TDataSet;
  AOptions: TFDCopyDataSetOptions = [coStructure,
    coRestart, coAppend]);
Before I continue, it’s worth noting that you can call CopyDataSet from other
FireDAC datasets, including FDTables and FDQueries. However, with these
classes, the benefits are far more limited. For example, while CopyDataSet is a
great way to define an FDMemTable’s structure (its fields and their data types),
doing this makes little sense with other FireDAC datasets. For example, both an
FDTable and an FDQuery get their structure from some underlying database or
an SQL statement. If you are creating a new data structure, you are likewise
doing so in the underlying database, not in the FDQuery itself.
As for populating an FDTable, FDQuery, or FDStoredProc with data, that is
typically done in order to write data into the underlying database. And while that
data might come from sources similar to where an FDMemTable’s data can
come from (user input or explicit programmatic operations), the end purpose is
to update the database. By comparison, data loaded into an FDMemTable might
never affect an underlying database, depending on the reason for your use of the
FDMemTable.
Granted, you might find some very useful purposes for calling CopyDataSet
from an FDQuery, and this is why I will discuss this operation briefly later in
this chapter, but for now, I am going to focus on using CopyDataSet to define
the structure of, and insert data into, FDMemTables.
When used with an FDMemTable, there are two primary reasons to call
CopyDataSet. The first is to define the structure of an FDMemTable, as well as
other properties, such as indexes, from an existing Delphi dataset. When used in
this manner, you will not load data, but you can then use this structure to add
data programmatically, permit the user to add data through the user interface, or
do anything else that a structure-only in-memory table can do (such as save this
structure to a file or stream for a future use).
The second reason to call CopyDataSet, and probably the most common one, is
to load both structure and data from an existing Delphi dataset. You can then
work with this data in memory, save the data (and structure) to a file or stream,
transfer the FDMemTable to some other process, permit the user to work with
the data through the user interface, and more.
It is the second, optional parameter of the CopyDataSet method that controls
exactly what is copied. This parameter, named AOptions, consists of a set of
zero or more flags that define what is to be copied (with an empty set, the call
accomplishes little).
Code: You can find the FDCopyingDataSets project in the code download.
The FDMemTable on the left side of Figure 12-8 is the dataset whose
CopyDataSet method will be called, and possible source datasets (a
ClientDataSet and two FDQueries) appear on the right side of this figure. You
initiate a call to CopyDataSet by first selecting one of the datasets displayed in
the DataSet to Copy list box which appears in the upper left area of the form,
after which you select which flags you want to include in the AOptions
parameter from the provided checkboxes, and then click the button labeled
Use CopyDataSet. (At this point, ignore the Use Data group box on the right
side of this figure. I will discuss this option later in this chapter.)
Since the checkboxes, and the construction of the AOptions parameter, are
instrumental to the use of CopyDataSet, let’s take a moment to discuss how the
parameter is built. The AOptions parameter is assembled by
BuildCopyDataSetOptions, a custom method that appears in each of the projects
referred to in this section.
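In outline, it simply reads each checkbox and assembles the corresponding set
(the checkbox names below are assumptions):

function TForm1.BuildCopyDataSetOptions: TFDCopyDataSetOptions;
begin
  Result := [];
  if cbStructure.Checked then Include(Result, coStructure);
  if cbRestart.Checked   then Include(Result, coRestart);
  if cbAppend.Checked    then Include(Result, coAppend);
  if cbEdit.Checked      then Include(Result, coEdit);
  if cbDelete.Checked    then Include(Result, coDelete);
  if cbRefresh.Checked   then Include(Result, coRefresh);
end;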
Figure 12-9 shows how this main form looks after calling CopyDataSet with just
the coStructure flag in the AOptions parameter when the EmployeeQuery
FDQuery is used as the source dataset.
Figure 12-9: CopyDataSet has been called, copying only the structure of a
query result
Though we’ve constructed the call to CopyDataSet at runtime with the help of
the BuildCopyDataSetOptions method, the equivalent call to CopyDataSet that
would produce the result shown in Figure 12-9 looks like this:

FDMemTable1.CopyDataSet( EmployeeQuery, [coStructure] );
Figure 12-10: CopyDataSet has been called, copying the structure and data
The equivalent call to CopyDataSet that would produce the results shown in
Figure 12-10 is shown here:

FDMemTable1.CopyDataSet( EmployeeQuery,
  [coStructure, coRestart, coAppend] );
If the FDDataSet whose CopyDataSet method you invoke already has structure,
you omit the coStructure flag in AOptions. In that case, only the fields from the
source dataset whose names match fields in the destination dataset are copied.
This is demonstrated in the FDCopyClientDataSet project.
Code: You can find the FDCopyClientDataSet project in the code download.
Three of the destination FDMemTable’s fields match fields from the source
dataset, which is a ClientDataSet that points to the customer.cds table provided
in Delphi’s sample files. The source dataset also includes a calculated field
named UpdateStatus.
Figure 12-11 depicts the result when coStructure is omitted from the destination
FDMemTable. In this case, only the three fields whose names and types match
those currently defined for the destination dataset appear in the destination
dataset. By comparison, if coStructure is included in the AOptions parameter,
many more fields will appear in the result dataset (which can be seen in Figure
12-12).
Figure 12-11: The structure is not copied, and compatible field data is
appended
The coRefresh flag determines whether the destination records will maintain
the update status of the corresponding records in the source dataset.
For example, if a record was updated in the source dataset, meaning that its
UpdateStatus property is usModified, the destination record UpdateStatus will
also be usModified, so long as the coRefresh flag is absent from AOptions. If
coRefresh is present, the destination dataset record’s UpdateStatus state will be
usUnmodified.
If the coCalcFields flag is present in the AOptions parameter, the values of
calculated fields will be included in the CopyDataSet operation. In Figure 12-
12, both the coStructure and the coCalcFields flags are present in the AOptions
parameter. As a result, all fields, including the one calculated field, appear in the
destination FDMemTable. Note that the button labeled Clear Destination Field
Structure needed to be clicked prior to calling CopyDataSet with the coStructure
flag, since coStructure cannot be used on an open destination FDDataSet.
Figure 12-12: All fields, including calculated fields, are copied to the
destination FDMemTable
The flags associated with aggregates, constraints, and indexes all work in a
similar fashion. Each of these flags is part of a pair. For example, for
aggregates, there are coAggregatesCopy and coAggregatesReset flags. When
the coAggregatesCopy flag is in the AOptions parameter, any Aggregate fields
defined in the source dataset will be copied to the destination FDDataSet. By
including the coAggregatesReset flag, any existing Aggregate fields in the
destination FDDataSet will first be removed before source Aggregate fields are
added. The constraint and index related pairs work in the same fashion.
Aggregate fields are discussed in Chapter 10, Creating and Using Virtual
Fields.
This brings us to the two remaining flags, coEdit and coDelete. These AOptions
flags are designed to work in situations where the destination dataset already
contains data. According to Delphi’s documentation, if the coEdit flag appears
in AOptions, FireDAC will attempt to locate each source dataset record in the
destination FDDataSet using a primary key, and, if found, will update the
destination record if the records do not match. Similarly, if coDelete appears in
AOptions, FireDAC will attempt to locate records marked for deletion in the
source (this assumes the source is in the cached updates mode), and will delete
them from the destination table.
Note: At the time of this writing (Delphi 10.2 Tokyo has just been released), I
could not confirm the preceding descriptions of the coEdit and coDelete flags.
In my tests it appeared that these flags had no effect. I reported these results to
the RAD Studio team, and they confirmed that there are issues. My
understanding is that these issues will be addressed in Update 1 for RAD
Studio 10.2 Tokyo.
Figure 12-13: The TempCust table in the database is initially empty before
CopyDataSet inserts records into it
Because the CopyDataSet method inserts records into the FDQuery object
whose SQL statement is SELECT * FROM TempCust, the inserted
records are immediately posted to the underlying database table, as shown in
Figure 12-14.
Loading an FDMemTable Using the Data Property
The second way to load one FireDAC dataset from another is to assign the
source dataset’s Data property to the destination’s Data property. Doing so
copies both the structure and all of the data appearing in the selected dataset
(the equivalent of calling CopyDataSet with the coStructure, coRestart, and
coAppend checkboxes checked).
Figure 12-15 shows the FDMemTable1 populated by assigning to it the results
of the EmployeeQuery FDQuery as exposed by the Data property. What you see
here is similar in almost every way to the results produced by clicking the Use
CopyDataSet button, shown back in Figure 12-10.
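In code, the entire operation is a single assignment (using the component names
of this project):

FDMemTable1.Data := EmployeeQuery.Data;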
Please note that the Data property is not configurable in the same way as
CopyDataSet. As a result, the operations invoked by clicking the Use Data
button ignore the configurations represented by the checkboxes that appear to
the left of the button.
While the results shown in Figure 12-15 and Figure 12-10 appear to be identical,
there are very big differences. The first is that the Data property will always
copy both structure and all data. CopyDataSet, on the other hand, can copy only
the structure, or it can copy only data into an existing structure.
The second difference is that CopyDataSet can work with any Delphi TDataSet.
The Data property can only be used to move structure and data from one
FDDataSet to another FDDataSet.
The third difference is that CopyDataSet may trigger insert and post events, on a
record-by-record basis. By comparison, copying the Data property from one
FDDataSet to another does not trigger insert, update, or delete events at the
FDDataSet level.
The final difference is that copying data using the Data property is faster than
copying using CopyDataSet. Comparing Figure 12-10 to Figure 12-15, at least
with this small result set, using the Data property is about three times faster than
using the CopyDataSet method, and I suspect that this difference will increase
with the amount of data being copied. This speed advantage results from FireDAC not
having to post updates to the destination FDDataSet on a record-by-record basis.
Note: The speed of CopyDataSet versus using Data was obtained using the Start
and Complete custom methods introduced in Chapter 6, Navigating and Editing
Data.
For a discussion of these methods, please refer back to that chapter.
While I’ve emphasized that using the Data property to copy data and structure
into a FireDAC dataset requires that the source dataset also be a FireDAC
dataset, this fact is worth mentioning again before I leave this topic. The reason
why is that the ClientDataSet component also has a Data property, and this
property also represents data and metadata associated with the ClientDataSet.
However, the ClientDataSet.Data property is incompatible with the Data
property of FireDAC datasets, which is why a ClientDataSet cannot be the
source dataset when copying data to a FireDAC dataset using the Data property.
Editing FDMemTable Data at Design Time
As mentioned earlier in this chapter, Delphi 10.2 Tokyo introduced the ability to
edit an FDMemTable’s data at design time. Without a way to save your edits,
this feature is a waste of time, since if you cannot save the data, editing it is a
pointless exercise.
The following steps demonstrate editing data at design time. In this instance, I
am going to load data derived from an FDQuery, edit it, and then save it to a
file. Use the following steps to demonstrate this new capability.
1. Select File | New | VCL Forms Application to create a new application.
2. Using the Data Explorer, expand the FireDAC node, then expand the
InterBase node, then expand the Employee node, and finally, expand the
Tables node. Your Data Explorer should look something like this:
3. Select the Customer node and drop it onto your new form. Delphi will
respond by creating an FDConnection named EmployeeConnection, and
an FDQuery named CustomerTable.
4. Set the Active property of the CustomerTable FDQuery to True.
5. Using the Tool Palette, drop an FDMemTable onto the new form.
6. Right-click the FDMemTable and select Assign DataSet. Select
CustomerTable from the displayed dialog box and click OK.
7. Right-click the FDMemTable once again and select Edit Dataset. Delphi
responds by displaying a form on which a DBNavigator, a DBGrid, and
several buttons appear, as shown in Figure 12-16.
8. Make your edits to the contents of the FDMemTable, and select OK to
save these changes and close the displayed form.
9. Now, you must take one more step or any edits that you made will be
lost. You must save your data. This data can either be saved in the
form’s resource file, or you can save the data to a file. To save the data
to a file, right-click the FDMemTable and select Save To File. Use the
browser to select a file to save the data to and then select Save.
In the next chapter, I take a look at the unique capabilities that FDMemTables
introduce: cloned cursors and nested datasets.
Chapter 13
More FDMemTables:
Cloned Cursors and
Nested DataSets
There are certain features that are especially well suited for FDMemTables, and
two of these are cloned cursors and nested datasets. As mentioned in the
preceding chapter, while all FireDAC datasets expose a public CloneCursor
method, this method is intended for FDMemTable use, and tends to raise
exceptions when you attempt to use other FireDAC datasets to invoke this
method.
Similarly, nested datasets are a natural feature for FDMemTables. Other
FireDAC datasets, such as FDQueries, support nested datasets when the
FireDAC driver and the underlying database also support nested datasets, such
as Oracle and PostgreSQL, but those situations are the exception. As a result,
nested datasets are discussed here in the context of FDMemTables.
Note: If you find yourself using nested datasets with FDQueries and a
compatible database, you can use the techniques described in this chapter, such
as using the NestedDataSet property of a DataSetField instance or the
DataSetField property of an FDMemTable, to refer to and work with those
nested datasets.
Cloned Cursors
There are times when you need two or more independent views of the same data
at the same time. One alternative is to load two copies of the data into memory.
This approach, however, results in an unnecessary increase in network traffic (or
disk access) and places redundant data in memory.
In some cases, a better option is to clone the cursor of an already populated
FDDataSet. When you clone a cursor, you create a second, independent pointer
to an existing FDDataSet's memory store, including Delta (the change cache, if
cached updates are being employed). Importantly, the cloned FDDataSet has an
independent view, including, but not limited to, current record, filter, index, and
range.
It is difficult to appreciate the power of cloned cursors without actually using
them, but an example can help. Imagine that you have loaded 25,000 records
into an FDMemTable, and you want to compare two separate records in that
FDMemTable programmatically.
One approach is to locate the first record and save some of its data into local
variables. You can then locate the second record and compare the saved data to
that in the second record.
Yet another approach is to load a second copy of the data in memory. You can
then locate the first record in one FDMemTable, the second record in the other
FDMemTable, and then directly compare the two records.
A third approach, and one that has advantages over the first two, is to utilize the
one copy of data in memory, and clone a second cursor onto this memory store.
The cloned FDMemTable cursor appears as if it were a second copy of the data
in memory, in that you now have two cursors (the original and the clone), and
each can point to a different record and utilize a different index. Importantly,
only one copy of the data is stored in memory, and the cloned cursor provides a
second, independent pointer into it. You can then point the original cursor to one
record, the cloned cursor to the other, and directly compare the two records.
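A minimal sketch of this third approach, assuming illustrative field names and
key values, looks like this:

var
  Clone: TFDMemTable;
begin
  Clone := TFDMemTable.Create(nil);
  try
    // A second, independent cursor over FDMemTable1's data store
    Clone.CloneCursor( FDMemTable1 );
    // Point each cursor at one of the two records to compare
    FDMemTable1.Locate('OrderNo', 1001, []);
    Clone.Locate('OrderNo', 2002, []);
    if FDMemTable1.FieldByName('AmountPaid').AsCurrency <>
       Clone.FieldByName('AmountPaid').AsCurrency then
      ShowMessage('The two orders differ');
  finally
    Clone.Free;
  end;
end;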
The CloneCursor method, like the CopyDataSet method (described in the last
chapter), is introduced in the FDDataSet class, which means that it can be called
by any FDDataSet, not just FDMemTables. For the most part, however, you will
not call CloneCursor with any FDDataSet other than FDMemTables. Trying to
do so often leads to access violations and other issues. When I asked the author
of FireDAC about these issues, he explained that CloneCursor was intended for
use only from FDMemTable. For this reason, I am including the discussion of
CloneCursor here in this FDMemTable chapter.
The following is the declaration of CloneCursor:

procedure CloneCursor(ADataSet: TFDDataSet; AReset: Boolean = False;
  AKeepSettings: Boolean = False);
When you invoke CloneCursor, the first argument that you pass is a reference to
an active FDDataSet whose cursor you want to clone. The AReset and
AKeepSettings parameters are used to either keep or discard the original
FDDataSet's view.
If you pass a value of False in both of these parameters, the cloned cursor will
adopt the values of the DetailFields, Filter, Filtered, FilterOptions,
FilterChanges, IndexName, IndexFieldNames, MasterSource and MasterFields
properties, as well as the OnFilterRecord event handler, of the source dataset. If
AReset is True, these properties will assume the default values. If AReset is
False and AKeepSettings is True, the clone will assume these properties, but
they may or may not be entirely valid. For example, the clone may be set to an
index that exists on the source but not on the destination.
Once you have cloned the cursor of an existing FDDataSet to an FDMemTable,
there are two specific uses. The first, and most general, is to use the clone as an
additional, readonly view of the data. This view can have a different current
record, sort order (index), range, and filter.
The second use is as an editable cursor into the cloned FDDataSet’s data store.
This use, however, has some important limitations. First, the cloned FDDataSet
must be in a cached updates mode. Second, you can only apply those updates by
calling the ApplyUpdates method of the originally cloned FDDataSet. Failure to
call ApplyUpdates on the original dataset will result in the loss of updates
applied either by the original FDDataSet or by any of its clones.
In the following two examples, I demonstrate these two uses for cloned cursors.
The first example demonstrates a read-only clone that provides a master-detail
view on a single table. The second example demonstrates an editable clone of an
FDDataSet in cached updates mode.
Master with Detail Clone
This example, one that I really like, is based on a cloned cursor example that I
originally included in my two previous ClientDataSet books, Delphi in Depth:
ClientDataSets (first and second editions). In this example, I use a single query
result set to display a self-referencing master-detail relationship. The main form
of this project is shown in Figure 13-1.
The FDQuery, named PartsQuery, holds a simple query (essentially a SELECT *
FROM Items) against the dbdemos.gdb database found among Delphi’s sample
databases.
The OnCreate event handler of the form modifies the connection found on the
SharedDMVcl data module to point to the dbdemos.gdb database (note that I’m
not using the employee.gdb database that is used in most of the code samples in
this book). This is the connection to which PartsQuery is connected. Next, the
FDMemTable, named OrdersByPartNoMemTable, is used to clone the query,
after which OrdersByPartNoMemTable is configured as a detail table of
PartsQuery. The final step in this OnCreate event handler is to set the Filtered
property of the FDMemTable to True, which will initially have no effect since
the Filter property is blank.
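Reduced to its essentials, and omitting the re-pointing of the connection, the
handler looks something like this:

procedure TForm1.FormCreate(Sender: TObject);
begin
  PartsQuery.Open;
  // Clone the query's cursor into the FDMemTable that will act
  // as the detail dataset
  OrdersByPartNoMemTable.CloneCursor( PartsQuery );
  OrdersByPartNoMemTable.Filtered := True;
end;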
Figure 13-2 shows the running form. The cloned cursor is shown in the lower
DBGrid. Since this cloned cursor has been configured to act as the detail table in
a master-detail relationship, the clone only displays those orders where the part
number is the same as that associated with the current record in the upper
DBGrid.
Figure 13-2: A single in-memory data store is used to display both the
master records as well as the detail records
If you inspect Figure 13-2, you will see that the current record in the master
table is associated with PartNo 12310. Furthermore, the detail table is displaying
all records in the dataset in which the PartNo is 12310.
The main form also includes a checkbox, whose caption is Include Current
OrderNo. In Figure 13-2, you can see that the current record in the upper
DBGrid is associated with order number 1004, and the detail records include
that order number (as well as order number 1074). If you uncheck the checkbox,
the detail dataset will include only the other orders that contain the selected part
number in the master table, which in this case, would cause order number 1004
to be omitted from the detail view.
This effect is controlled by the OnDataChange event handler of DataSource1,
whose DataSet property points to PartsQuery. Each time you navigate to another
record in the master table, this event handler triggers to update the Filter
property of the detail table.
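A sketch of this handler, assuming the field names visible in Figure 13-2 and an
illustrative checkbox name, looks like this:

procedure TForm1.DataSource1DataChange(Sender: TObject; Field: TField);
begin
  if IncludeCurrentOrderNoCheckBox.Checked then
    OrdersByPartNoMemTable.Filter :=
      'PartNo = ' + PartsQuery.FieldByName('PartNo').AsString
  else
    OrdersByPartNoMemTable.Filter :=
      'PartNo = ' + PartsQuery.FieldByName('PartNo').AsString +
      ' and OrderNo <> ' + PartsQuery.FieldByName('OrderNo').AsString;
end;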
Clicking the checkbox also results in a change to the Filter property of the
cloned dataset. However, in this case, it does so by directly calling the
OnDataChange event handler of DataSource1.
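In other words, the checkbox’s OnClick handler is little more than this (names
are assumptions):

procedure TForm1.IncludeCurrentOrderNoCheckBoxClick(Sender: TObject);
begin
  // Rebuild the detail filter using the same logic used during navigation
  DataSource1DataChange( Sender, nil );
end;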
Editing with cloned cursors and cached updates is demonstrated in the project
named FDCloningCachedCursors. The main form of this project is shown in
Delphi’s designer in Figure 13-3.
This project contains one FDQuery and two FDMemTables. The FDQuery
contains a simple SELECT * FROM Employee SQL statement, and is opened
by clicking on the button whose default caption is Open FDQuery1. This button,
when clicked, will determine whether the query is active or not. If currently
closed, the query will be opened and will be placed in the cached updates state,
so long as the checkbox labeled Cached Updates is checked. If the query is
already active, it is closed.
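A sketch of that OnClick handler, with assumed button and checkbox names,
follows:

procedure TForm1.OpenFDQuery1BtnClick(Sender: TObject);
begin
  if not FDQuery1.Active then
  begin
    // Enter cached updates mode only when the checkbox is checked
    FDQuery1.CachedUpdates := CachedUpdatesCheckBox.Checked;
    FDQuery1.Open;
    OpenFDQuery1Btn.Caption := 'Close FDQuery1';
  end
  else
  begin
    FDQuery1.Close;
    OpenFDQuery1Btn.Caption := 'Open FDQuery1';
  end;
end;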
If you take a close look at Figure 13-4, you might notice that the Cached
Updates checkbox is not checked. As a result, although FDQuery1 displays the
updated data, and the status of the record appears as usModified, the query was
not aware that a change has been posted, and so it did not write this change to
the underlying database upon posting. Unless FDQuery1 is in cached updates
mode when a clone edits the data, the changes made by the clone will not be
respected.
This is a bit complicated, so let me try to be as clear as possible. If FDQuery1 is
open in cached updates mode, and then its cursor is cloned, changes made by the
clone will appear in the change log of FDQuery1, and will be applied when
ApplyUpdates is called on FDQuery1.
If FDQuery1 is not in cached updates mode, edits made by a cloned cursor will
appear as though they are in FDQuery1, but will not be written to the database.
Similarly, if FDQuery1 is placed into cached updates mode after a clone has
edited the data, the clone’s changes made prior to FDQuery1’s entry into cached
updates mode will not be in the change cache, and therefore will not be seen as
an edited record that needs to be applied.
The only way that a clone’s edits can reach the underlying database is to post
them while FDQuery1 is in cached updates mode, after which a call to
FDQuery1’s ApplyUpdates method will attempt to write those changes to the
underlying database. Only under that condition will FDQuery1’s records edited
by a clone be marked with the update status information necessary for
ApplyUpdates to apply the clone’s edits.
Having said that, any changes made directly to FDQuery1, either
programmatically or through data binding, when it is not in cached updates mode
are immediately written to the underlying database once that change is posted to
FDQuery1, whether or not any clones of FDQuery1 exist. And, any changes
made directly to FDQuery1 while it is in cached updates mode will reside in the
change cache, and a subsequent call to ApplyUpdates will attempt to write that
data to the underlying database.
Furthermore, only the original FDDataSet can write the cached updates to the
underlying database. Calling ApplyUpdates on a cloned cursor will have no
effect.
Being able to edit a common data store using one or more cloned cursors and
cached updates is a powerful capability, one that can provide the basis for
sophisticated features within your applications. However, this is a complicated
topic. Playing around with the FDCloningCachedCursors project can be a good
way to better understand how this mechanism works.
Nested DataSets
If you’ve never used nested datasets before, you might not see the value in this
type of structure. If that is the case, an example from a real world application
should help.
A trucking company has an application that their drivers use to manage the data
associated with deliveries. Using the application, a driver receives a single
FDMemTable streamed over the Internet to the driver’s laptop, tablet, or phone.
This FDMemTable contains all of the information that the driver may need
during the course of a scheduled delivery route.
The FDMemTable contains one record for each stop (location) on the driver’s
route, and includes the name and address of the drop-off location, and other
information, including a BLOB containing a PDF of the location map. This
record also contains a nested dataset. This dataset includes one record for each
of the deliveries to be made at that location. For example, there might be one or
more customers at a particular address. The record for each customer includes
another nested dataset that contains a list of packages for that customer. The
customer-level nested dataset also includes BLOB fields containing PDFs of the
invoice, bill of lading, and special handling instructions.
Actually, there are two specific reasons for using nested datasets. The first is
that they permit you to use an FDMemTable as a standalone database. In those
cases, nested datasets permit you to store master-detail relationships about
numerous entities in a single FDMemTable that can be written to, and read
from, a file, or be written to a stream and transmitted over the internet to an
endpoint where it is loaded from the stream. In a situation like this, the ability to
persist that data is extremely valuable. For a detailed look at FireDAC dataset
persistence, please refer to Chapter 11, Persisting Data.
The second reason is that it permits you to work with related data from an
underlying relational database in a manner that permits the FDMemTable to
efficiently manage those relationships. For example, the trucking application
gets its data from a traditional database server. However, by moving that data
into an FDMemTable with nested datasets, that same information can be
efficiently communicated to the application used by the truck drivers.
Defining Nested DataSets at Design Time
This section walks you through the creation of nested datasets at design time.
When I began to write this section, I had originally intended to demonstrate the
creation of nested datasets using two different techniques, similar to the design-
time creation of FDMemTable structures I described at the beginning of
Chapter 12, Understanding FDMemTables. Unfortunately, I found that the
FieldDefs-based approach does not lend itself to design-time use, so the
FDMemTable shown in Figure 13-5 was configured using persistent fields.
Figure 13-5: An FDMemTable with a nested dataset has been created using
persistent fields
If you run this project, you will find that you can enter data into the nested
dataset by clicking on Field3 to display an ellipsis button. When you click the
ellipsis button, an automatically generated form is created, which displays a grid
into which you can enter data, as shown in Figure 13-6.
You don’t have control over how the automatically created grid looks, so in
most cases, you will provide your own interface to the nested datasets. You do
this the same way you do with any other dataset: You associate data-aware
controls, or use LiveBindings, to connect a user interface element to the dataset,
the dataset being FDMemTable2 in this case.
You can demonstrate this by using the following steps:
1. Return to the project in Delphi’s designer. Adjust the position of
DBGrid1, and then place another DBGrid, a DBNavigator, and a
DataSource onto the form.
2. Place the second DBNavigator beneath DBGrid1, and position DBGrid2
beneath DBNavigator2.
3. Assign DataSource2 to the DataSource properties of both DBNavigator2
and DBGrid2.
4. Finally, point the DataSet property of DataSource2 to FDMemTable2.
When you are done, your form will look something like that shown in
Figure 13-7.
Figure 13-7: The lower grid in this figure is pointing to a nested dataset
When you run this project, notice that you can enter data into the top grid, as
well as into the bottom grid. Importantly, the data entered into the bottom grid
will be associated with the nested dataset corresponding to the record appearing
in the top grid.
Note: If you want to suppress the display of the nested dataset field in the top
grid (DBGrid1), you can double click the DBGrid to display the Columns editor,
click Add All Fields to create one TColumn for each field in the connected
dataset, and then select the column associated with the nested dataset and click
the Delete Selected button.
Defining Nested DataSets at Runtime
To create nested datasets at runtime, your code must repeat the steps that you
would otherwise perform when configuring the FDMemTable manually, and in
the same order. So long as you get that right, your code should work fine.
Unfortunately, writing code that repeats the steps that you take manually is not
as easy as it sounds. As a result, the two example projects that I show in the
following sections are a bit more complicated than their design-time
counterparts, in that they both create a nested, nested dataset — a top-level
FDMemTable that contains one nested dataset which in turn contains another
nested dataset.
USING FIELDDEFS AT RUNTIME
Defining an FDMemTable’s structure to include nested datasets at runtime using
FieldDefs is demonstrated in the FDRuntimeNestedFieldDefs project. The main
form of this project is shown in Figure 13-8.
procedure TForm1.CreateNestedDataSets;
begin
TopLevelMemTable := TFDMemTable.Create(Self);
MidLevelMemTable := TFDMemTable.Create(TopLevelMemTable);
ThirdLevelMemTable := TFDMemTable.Create(TopLevelMemTable);
with TopLevelMemTable.FieldDefs do
begin
Add('TopID', ftInteger);
Add('TopName', ftString, 40);
Add('TopComments', ftMemo);
Add('TopDateInitiated', ftDate);
end;
with TopLevelMemTable.FieldDefs.AddFieldDef do
begin
Name := 'TopNested';
DataType := ftDataSet;
with ChildDefs do
begin
Add('MidID', ftInteger);
Add('MidName', ftString, 30);
with AddChild do
begin
Name := 'MidNested';
DataType := ftDataSet;
with ChildDefs do
begin
Add('ThirdID', ftInteger);
Add('ThirdName', ftString, 25);
Add('ThirdActive', ftBoolean);
end;
end;
end;
end;
//Create the FDMemTable and its nested datasets
TopLevelMemTable.Active := True;
//Hook up the other FDMemTable
MidLevelMemTable.DataSetField :=
TDataSetField(TopLevelMemTable.FieldByName('TopNested'));
ThirdLevelMemTable.DataSetField :=
TDataSetField(MidLevelMemTable.FieldByName('MidNested'));
//Configure the DataSources
DataSource1.DataSet := TopLevelMemTable;
DataSource2.DataSet := MidLevelMemTable;
Datasource3.DataSet := ThirdLevelMemTable;
end;
As you can see from this code, after creating the three FDMemTable objects,
five FieldDefs are added to the top-level FDMemTable using the FieldDefs Add
and AddFieldDef methods.
Once the structure of the top-level FDMemTable has been defined, each of the
nested dataset structures needs to be defined. After completing all of the
necessary FieldDef configurations, the Active property of the top-level
FDMemTable is set to True, which creates both the top-level FDMemTable
and the nested datasets.
Finally, this code hooks up the second-tier and third-tier FDMemTables to their
appropriate DataSetFields. Figure 13-9 shows the main form of the
FDRuntimeNestedFieldDefs project at runtime, after the Create DataSets button
has been clicked.
Another interesting aspect of this project, one that it shares with the project
covered in the next section, is that it permits the three-tier FDMemTable to be
saved to disk as well as loaded from a previously saved FDMemTable. When an
FDMemTable includes nested datasets, saving the FDMemTable saves the
nested metadata, data and the change cache (if cached updates are enabled).
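For example, a minimal sketch of persisting and reloading the three-tier
FDMemTable might look like the following (the file name and binary format are
illustrative; saving in binary format requires the FireDAC.Stan.StorageBin unit
in your uses clause):

  //Save the top-level FDMemTable, including its nested datasets
  TopLevelMemTable.SaveToFile('NestedData.fds', sfBinary);
  //...and later restore the saved structure, data, and change cache
  TopLevelMemTable.LoadFromFile('NestedData.fds', sfBinary);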
USING FIELDS AT RUNTIME
Defining an FDMemTable's structure to include nested datasets at runtime using
persistent fields is demonstrated in the FDRuntimeNestedFields project. The
main form of this project looks exactly like the one shown for the
FDRuntimeNestedFieldDefs project shown in Figure 13-9. In fact, the only
difference between these two projects is the code found in the
CreateNestedDataSets method.
procedure TForm1.CreateNestedDataSets;
begin
TopLevelMemTable := TFDMemTable.Create(Self);
MidLevelMemTable := TFDMemTable.Create(TopLevelMemTable);
ThirdLevelMemTable := TFDMemTable.Create(TopLevelMemTable);
with TIntegerField.Create(Self) do
begin
Name := 'TopID';
FieldKind := fkData;
FieldName := 'ID';
DataSet := TopLevelMemTable;
Required := True;
end;
with TStringField.Create(Self) do
begin
Name := 'TopName';
FieldKind := fkData;
FieldName := 'Name';
Size := 40;
DataSet := TopLevelMemTable;
end;
with TMemoField.Create(Self) do
begin
Name := 'TopComments';
FieldKind := fkData;
FieldName := 'Comments';
DataSet := TopLevelMemTable;
end;
with TDateField.Create(Self) do
begin
Name := 'TopDateInitiated';
FieldKind := fkData;
FieldName := 'Date Initiated';
DataSet := TopLevelMemTable;
end;
//Note: For TDataSetFields, FieldKind is fkDataSet by default
with TDataSetField.Create(Self) do
begin
Name := 'TopNested';
FieldName := 'NestedDataSet';
DataSet := TopLevelMemTable;
end;
//MidLevelMemTable
MidLevelMemTable.DataSetField :=
TDataSetField(FindComponent('TopNested'));
with TIntegerField.Create(Self) do
begin
Name := 'MidID';
FieldKind := fkData;
FieldName := 'MidID';
DataSet := MidLevelMemTable;
Required := True;
end;
with TStringField.Create(Self) do
begin
Name := 'MidName';
FieldKind := fkData;
FieldName := 'MidName';
DataSet := MidLevelMemTable;
Size := 30;
end;
with TDataSetField.Create(Self) do
begin
Name := 'MidNested';
FieldName := 'NestedNestedDataSet';
DataSet := MidLevelMemTable;
end;
//Third Level
ThirdLevelMemTable.DataSetField :=
TDataSetField(FindComponent('MidNested'));
with TIntegerField.Create(Self) do
begin
Name := 'ThirdID';
FieldKind := fkData;
FieldName := 'ThirdID';
DataSet := ThirdLevelMemTable;
Required := True;
end;
with TStringField.Create(Self) do
begin
Name := 'ThirdName';
FieldKind := fkData;
FieldName := 'ThirdName';
DataSet := ThirdLevelMemTable;
Size := 25;
end;
with TBooleanField.Create(Self) do
begin
Name := 'ThirdActive';
FieldKind := fkData;
FieldName := 'ThirdActive';
DataSet := ThirdLevelMemTable;
end;
//Create the FDMemTable and its nested datasets
TopLevelMemTable.Active := True;
//Configure the DataSources
DataSource1.DataSet := TopLevelMemTable;
(**)
DataSource2.DataSet := MidLevelMemTable;
Datasource3.DataSet := ThirdLevelMemTable;
(**)
(* This is an alternative way of doing the above
DataSource2.DataSet :=
TDataSetField(FindComponent('TopNested')).NestedDataSet;
Datasource3.DataSet :=
TDataSetField(FindComponent('MidNested')).NestedDataSet;
(**)
end;
There are two rather slight differences between this code and the corresponding
method in the FDRuntimeNestedFieldDefs project. One is that the DataSetField
properties of the MidLevelMemTable and ThirdLevelMemTable components
are assigned prior to defining the structure of those FDMemTables. This step is
required earlier in the code since each persistent field that you create must
specifically be assigned to an FDMemTable, and that must be a valid
FDMemTable at the time that the Active property of the top-level FDMemTable
is set to True.
The second difference is that this code calls the constructors of TField
descendants, such as TIntegerField, TStringField, and TDataSetField. The use
of those constructors is more verbose than the Add methods associated with
FieldDefs and ChildDefs. As a result, this version of the method is much longer
than the previous one.
Figure 13-10 shows what this project looks like when it is running and the
Create button has been clicked. As you can see, there is no real discernible
difference between what you see in this figure and what is shown in Figure 13-9
(other than no data has yet been added to the datasets in Figure 13-10).
What you cannot do is directly work with the nested dataset through its owner
record. For example, in the FDRuntimeNestedFields project you cannot do something
like the following:
If you want to refer to a nested dataset by way of the current record, you must
explicitly cast the associated field as a TDataSetField, after which you can
access the NestedDataSet property, a TDataSet reference, of that field. For
example, imagine that TopTab is an FDMemTable containing a nested dataset
field named DataSet. Here is how you can refer to the nested dataset of the
current record of TopTab through its FieldByName method:
var
NestedData: TDataSet;
begin
if not TopTab.FieldByName('DataSet').IsNull then
begin
NestedData :=
TDataSetField(TopTab.FieldByName('DataSet')).NestedDataSet;
if NestedData.State in dsEditModes then
NestedData.Post;
end;
...
The second observation is that, while nested datasets are powerful and useful,
they do introduce limits on how you can work with your data when compared to
those situations where you load data from related tables into two or more
separate FDDataSets.
The primary limit introduced by nested datasets is that they make searching for
data across master table records slow and inconvenient. For example, imagine
that you have written a contact management system using FDMemTables as the
datasets, and want to employ a master-detail relationship between contacts and
their various phone numbers.
One way to do this would be to utilize two tables, one for contacts, and one for
contact phone numbers. Your application would then have to separately load
contacts and contact phone numbers into two separate FDMemTables, and
employ user interface techniques (such as dynamic master-detail links or filters
or ranges, see Chapter 9, Filtering Data for details) to display the master-detail
relationship between a given contact and their phone numbers.
An alternative would be to create a single contact FDMemTable, and include a
nested dataset for contact phone numbers. The advantage of this approach is that
you can easily create master-detail views that limit the display of a contact's
phone numbers automatically.
The drawback to this second technique involves your options for working with
the nested data. Specifically, imagine that you get a phone bill and want to know
which contact is associated with a rather expensive call that appears on that bill.
If you load contact phone numbers into a separate FDMemTable, you can use
one of the search techniques described in Chapter 8, Searching Data, to locate
the phone number in the contact phone numbers table, which will quickly reveal
the contact associated with that phone number.
By comparison, if you have embedded contact phone numbers as a nested
dataset in the contacts table, no global search is possible. Your only solution is
to navigate (scan) record-by-record through the contacts table, performing a
separate search on each nested dataset. Such a search would be significantly
slower, on average, than a search on a separate contact phone numbers table.
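To make that concrete, such a scan might look roughly like the following
sketch, in which the dataset, field, and value names are hypothetical:

  var
    Phones: TDataSet;
  begin
    Contacts.First;
    while not Contacts.Eof do
    begin
      //Each contact record carries its own nested phone number dataset
      Phones :=
        TDataSetField(Contacts.FieldByName('PhoneNumbers')).NestedDataSet;
      if Phones.Locate('PhoneNumber', '555-1234', []) then
        Break; //the current Contacts record is the matching contact
      Contacts.Next;
    end;
  end;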
Does this affect your decision to employ nested datasets or not? The answer is
that it depends on your reasons for using nested datasets. In most cases, nested
datasets are used to represent master-detail data in a shared structure.
Nonetheless, I wanted to bring up this issue because it makes a difference in
some cases.
Chapter 14
The SQL Command
Preprocessor
This feature, which in the past has been called Dynamic SQL, involves a
preprocessor that examines your SQL statements, and under the right conditions,
replaces text within those statements before passing the SQL to the underlying
database. There are two roles played by the SQL command preprocessor. The
primary role of this preprocessor is to modify a somewhat generic SQL
statement in order to accommodate the idiosyncratic syntax of the underlying
database server. Although SQL is 'standardized,' each database supports its own
dialect of SQL, reflecting its particular origin and feature set.
The value of the SQL command preprocessor will be apparent to any developer
who has had to build applications that must work with more than one underlying
database. For example, developers who write vertical market applications, those
designed for a particular industry, and which will be used by many different
companies, often have to support two or more popular databases, such as Oracle
and MS SQL Server. Doing so permits the client companies to use their current
database server for the application's data, rather than requiring those companies
to possibly have to support and maintain (and license) an additional database
server.
The second role of the SQL command preprocessor is to provide you with
enormous flexibility in writing your SQL statements. For example, the
preprocessor permits you to embed macros anywhere within your SQL
statements, so long as you assign valid values to those macros before attempting
to execute the resulting SQL. Similarly, the preprocessor lets you include
FireDAC’s scalar functions within your SQL statements, permitting you to
perform calculations or transformations that might otherwise be impossible.
Without involving FireDAC's SQL command preprocessor, databases generally
support limited flexibility in the SQL that they execute. This flexibility is
provided in the form of parameters, which are placeholders for values that can
appear in the predicates of the WHERE clause of SQL SELECT and DELETE
queries, the SET clause of UPDATE queries, and the VALUES part of INSERT
queries. In general, parameters come in two forms, named and unnamed. A
query that employs at least one parameter is referred to as a parameterized
query.
In addition to providing some flexibility, parameterized queries can provide a
performance advantage. Specifically, when you are going to execute the same
basic query, albeit with different values in the parameters, the query only needs
to be prepared once. For subsequent executions, only the new parameter values
need to be communicated to the underlying database, and this saves time.
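A minimal sketch of that pattern might look like the following, using a
hypothetical CUSTOMER table:

  FDQuery1.SQL.Text := 'SELECT * FROM CUSTOMER WHERE CUST_NO = :custno';
  FDQuery1.Prepare; //prepared once
  FDQuery1.ParamByName('custno').AsInteger := 1001;
  FDQuery1.Open;
  //...later, only the new parameter value is sent to the server
  FDQuery1.Close;
  FDQuery1.ParamByName('custno').AsInteger := 1002;
  FDQuery1.Open;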
While parameterized queries offer some flexibility and potential performance
enhancements, they are limited. Specifically, parameters can be used in place of
literal values in SQL statements (such as the predicates in WHERE clauses), but
cannot be used in place of identifiers, such as table names or column names.
By comparison, FireDAC's SQL preprocessor permits a great deal of
flexibility (though no specific performance benefits are realized beyond those
gained through traditional SQL parameters). For example, features supported by
the SQL preprocessor permit flexibility in both the SELECT clause and the
FROM clause of SELECT statements, the UPDATE clause of UPDATE
statements, and the target of DROP statements.
When the ResourceOptions.MacroCreate and ResourceOptions.MacroExpand
properties of FireDAC datasets are set to True (the default), the SQL command
preprocessor searches the SQL statement for macros and special escape
characters that identify an operation it needs to perform. When it detects a
macro or an escape sequence, it replaces the identified part of the SQL statement
with new text, which it then passes along to the underlying database.
Importantly, the underlying database never sees the original text that includes
the macros or escape sequences.
The SQL preprocessor supports two distinct types of operations. These are:
Macro substitution
Escape sequences
These operations are discussed in the following sections.
In order to view the workings of the SQL command preprocessor, I have built a
little application that will display the original query, as well as the query as it
appears once the preprocessor does its job. The main form of this project is
shown in Figure 14-1.
Code: The FDSQLPreprocessor project can be found in the code download. See
Appendix A for details.
Unlike almost every other project described in this book, this particular project
is one that you have to configure at design time. Specifically, you need to define
the query and assign any required values to variables that you have declared, all
at design time. You then run the project and click the button labeled Show
Processor Work. The event handler on this button displays the original query in
the upper memo field, and prepares the query. After this, it displays any defined
macros in the provided list box, as well as the post-processed query in the lower
memo field. The post-processed query can be read from the FDQuery’s Text
property. This event handler is shown here:
begin
Memo1.Lines.Clear;
Memo2.Lines.Clear;
Memo1.Lines.Add( FDQuery1.SQL.Text );
FDQuery1.Prepare;
ListBox1.Clear;
for i := 0 to FDQuery1.MacroCount-1 do
begin
Macro := FDQuery1.Macros[i];
ListBox1.Items.Add( 'Name: ' + Macro.Name + ', Type : ' +
GetEnumName(TypeInfo(TFDMacroDataType),Ord(Macro.DataType)) +
', Value: ' + Macro.Value );
end;
Memo2.Lines.Text := FDQuery1.Text;
end;
Macro Substitution
When you employ macro substitution, you embed macros in your SQL
statements, even in those places where traditional SQL parameters are not
allowed. For example, the use of macro symbols in your SQL statements
permits you to write queries whose tables, fields, or WHERE clause predicates
are not known until runtime. As mentioned earlier, your entire SQL statement
may consist of a single macro. All you need to do is ensure that you have bound
a valid value to each macro appearing in your SQL statement before executing
the query, and the command preprocessor will take care of updating the
resulting SQL.
In the case of an SQL statement consisting of a single macro, that macro must
be replaced by a valid and complete SQL statement before execution, which
really makes no sense. If you are going to the trouble of replacing a single
macro with an entire SQL statement, you might as well just assign a valid SQL
statement to your FDQuery’s SQL property at runtime, instead of messing
around with a macro.
Macros are identified by either the ! character or the & character, which you
follow with the macro name. Prior to executing a query, you must provide a
valid value for each of its macros using the Macros collection property of the
dataset.
For example, consider the following SQL statement, which includes one macro:
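A statement of this kind might look like the following, with Customer standing
in for the actual table:

  SELECT &Fields FROM Customer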
Prior to executing this query, a statement similar to the following can be used to
replace the &Fields macro with a suitable SELECT clause:
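Such an assignment might look like this (the field list shown here is
illustrative):

  FDQuery1.Macros[0].Value := 'Company, City, Country';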
The use of the indexed Macros property in the preceding statement requires you
to know that the macro to which you are assigning a value is the first macro
appearing in the SQL statement. Alternatively, you can use the MacroByName
method to assign a value to a macro. When using MacroByName, you omit the
& or ! character that identifies the macro, as shown here:
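For example, the same assignment using MacroByName might look like this:

  FDQuery1.MacroByName('Fields').Value := 'Company, City, Country';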
Figure 14-2 shows how this query, and its processed result, appear in the
FDSQLPreprocessor project.
In the preceding examples, the macro name was preceded by the ampersand
character (&). In this mode, referred to as SQL substitution, the substitution is
typed, meaning that the value assigned to the macro is inserted into the SQL
statement according to the macro's data type. This typing can be done
programmatically, but is most obvious when you view the Macro tab of the SQL
Editor, where you can specify a data type for a macro.
By comparison, when you precede the macro name with the exclamation point
character (!), a mode referred to as string substitution, the value assigned to the
macro is inserted as-is: it is concatenated into the SQL string as a string, without
any conversion based on the underlying database.
While SQL substitution and string substitution were originally intended to be
different, it turns out that they work the same. If you use the macro data type of
mdRaw at design time, or assign to the AsRaw property of the macro at runtime,
you get the intended behavior of string substitution, regardless of which
character (! or &) you use to define the macro. Likewise, using one of the other
data types will perform the proper conversion, based on your connected
database, whether you use ! or &.
Macro substitution is demonstrated in a second project named
FDMacroSubstitution. This project generates an SQL statement whose field list
and table name are created dynamically at runtime. The main form of this project
is shown in Figure 14-3. The table shown on the right side of this figure is
the result of the SQL statement execution.
Figure 14-3: The fields that appear in the result set are selected at runtime
The actual query that is executed to create the tabular view on the right side of
this form is shown here. As should be obvious, &FieldList and &TableName are
macros:
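In its simplest form, that query reads as follows:

  SELECT &FieldList FROM &TableName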
You assign a value to one of these macros at design time by selecting the macro
name in the Macros property editor and then setting the Value property using
the Object Inspector, as shown here:
You can also access, and assign values to, macros using the Query Editor.
Figure 14-4 shows the Query Editor with the Macros tab selected.
While you may set macros at design time in order to create a default query, it is
the runtime assignment of data to the macros that creates the greatest flexibility.
The following is the LoadData method, which is called from the button labeled
Show Data seen in Figure 14-3. This method generates a field list from the
fields selected in the Fields list box, and uses the table name selected in the
Tables list box:
procedure TForm1.LoadData;
var
i: Integer;
FieldList: string;
ListBoxItem: TListBoxItem;
begin
DataModule2.SelectQry.Close;
for i := 0 to ListBox2.Items.Count - 1 do
if ListBox2.ListItems[i].IsSelected then
BuildList(ListBox2.ListItems[i].Text);
DataModule2.SelectQry.Macros[0].AsRaw := FieldList;
DataModule2.SelectQry.MacroByName('TableName').AsRaw :=
ListBox1.Selected.Text;
DataModule2.SelectQry.Open;
end;
Note: When using macro substitution, please use great care if the substituted
text comes from user input. Macro substitution can expose you to many of the
same dangers associated with SQL injection attacks.
Escape Sequences
Escape sequences are strings that you embed in your SQL which begin with an
open curly brace character ({) preceded by a white space character, and end
with a close curly brace (}). This pair of characters signals the SQL command
preprocessor that it should take an action. If you need to include the { or }
characters in some other part of your SQL statement, you must precede that
character with an escape character. This process is described in greater detail
later in this chapter in the section Special Character Processing.
There are four specific types of escape sequences in FireDAC. These are:
Constant substitution
Identifier substitution
Conditional substitution
FireDAC scalar functions
Each of these types is described in the following sections.
Constant Substitution
Constant substitution permits you to identify a literal value in your SQL
statement. The type of literal is defined by an identifier that precedes the literal,
and this identifier is case insensitive. Prior to submitting the processed SQL
statement, the preprocessor ensures that the constant value is formatted
appropriately for the underlying database.
FireDAC supports constant substitution for six types. These are shown in Table
14-1.
For example, assuming that the FromTS field is a date/time type field, a
timestamp, for instance, the following query will select records where the
FromTS field contains values equal to or later than the first minute of the first
day of January in the year 2017, and where the ThruTS field is null. This will
work regardless of the format that the underlying database requires for date/time
constants.
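Assuming the timestamp form of constant substitution listed in Table 14-1
(shown here as {dt }), and a hypothetical table name, such a query might look
like this:

  SELECT * FROM PROJECT_LOG
  WHERE FromTS >= {dt 2017-01-01 00:01:00} AND ThruTS IS NULL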
The results of the processing of this query are shown in Figure 14-5.
Identifier Substitution
You use identifier substitution to reference a field name or table name in your
SQL statement using delimiters appropriate to the underlying database. For
example, when a field name or table name includes white space, or some other
non-standard characters, you often have to enclose the name in double quotes,
square braces, or some other similar delimiter, depending on the database to
which you are connected.
To use identifier substitution, enclose the field name or table name within curly
braces, preceding the name with the characters 'id'. For example, consider the following
SQL statement:
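Based on the description that follows, the statement is along these lines, with
Customer standing in for the actual table:

  SELECT CustNo, {id First Name} FROM Customer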
Since the First Name field includes white space, it must be delimited prior to
sending it to the underlying database. The use of identifier substitution ensures
that delimiters appropriate for your connected database are substituted. As a
result, if the underlying database is InterBase, the SQL statement will look like
that shown in Figure 14-6.
Conditional Substitution
Conditional substitution permits you to include statements that FireDAC will
substitute during preprocessing with SQL syntax appropriate to the underlying
database. In some cases, however, the correct syntax can only be known by you,
in which case, you must provide the various alternative SQL segments that will
be substituted at runtime.
For instance, imagine that your application supports both InterBase and MS
SQL Server. Suppose further that at least one table that your application uses is
not under your control (it was created by your client), and that the date field of
this table (Orders) is named OrdDate in your InterBase database, but is named
OrderDate in the MS SQL Server database.
FireDAC supports two flavors of conditional substitution. In the first, you list
two or more database identifiers, preceded by the characters if or iif, and
followed by two or more corresponding SQL statements that you want to
substitute conditionally. For example, the following can be used to select the
OrderID and order date fields from the underlying database (see Figure 14-7),
where the order date field is named ORDER_DATE in InterBase and
ORDDATE in MS SQL Server:
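Using the identifier/value pairs just described, such a statement would read
something like this:

  SELECT OrderID,
    {iif (MSSQL, ORDDATE, INTRBASE, ORDER_DATE)}
  FROM Sales;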
In most cases, you will include one value for each database identifier. However,
you can include one extra value, at the end, which will be used in the case that
none of the identifiers match the connected database. For example, in the
following version, which also uses 'if' instead of 'iif,' the field name ODate will
be used if the connected database is neither MS SQL nor InterBase.
SELECT OrderID,
{if (MSSQL, OrdDate, INTRBASE, ORDER_DATE, ODate) }
FROM Sales;
The table of database identifiers used in conditional substitution includes
entries such as the following:

FIREBIRD     Firebird
INFORMIX     Informix
INTRBASE     InterBase
The use of the fn characters is optional, but can be helpful in making it obvious
that a scalar function is being called. Consequently, the following statement is
functionally identical to the preceding SQL statement:
The following illustration shows the Macros property of the FDQuery that
executes this query, depicting the assignment of the value to the &TabName
variable.
expression engine is the feature that permits you to include FireDAC scalar
functions in expressions, specifically filter expressions, filter-based index
expressions, calculated fields, and custom expressions produced by the
FireDAC expression evaluator. When FireDAC scalar functions are used in
expressions, the use of the curly braces is optional. In other words, the scalar
function used in an expression can appear inline with or without being enclosed
in curly braces.
The following is an example of the LENGTH string/character scalar function
being used in a filter expression. This example was shown in Chapter 9,
Filtering Data:
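A filter of that kind takes roughly this form, with the dataset and field names
being illustrative:

  FDQuery1.Filter := 'LENGTH(Company) > 20';
  FDQuery1.Filtered := True;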
Note: Custom expressions created with the FireDAC expression evaluator are
beyond the scope of this book. For information on custom expressions, please
refer to the section Writing Expressions in the FireDAC help.
Representative entries from the scalar function tables include the following:

ASCII(string)    The ASCII code value of the leftmost character of the string
                 or character, as an integer.
ABS(number)      The absolute value of number.
CURDATE( )       The current date.
DATABASE( )      The name of the connected database.
CONVERT(value_exp, data_type)
As you can see, the CONVERT function takes two parameters, a data value and
an identifier that indicates the data type to which you want to convert the value
in the first parameter. The value of the first parameter can be a field identifier, a
literal value, or the result of an expression evaluation, including expressions that
include FireDAC scalar functions.
Valid CONVERT function identifiers are listed in Table 14-7.
Chapter 15
Array DML
This is a short chapter, so I am going to begin it with a true story. A coworker
approached me recently, looking for advice on how to most efficiently insert a
large number of records into a table. Since we were using FireDAC as our data
access mechanism, I suggested Array DML, a feature of FireDAC that supports
the batch processing of parameterized queries supported by a large number of
databases, including the one that we were using.
I explained how Array DML worked, and provided him with a code sample that
demonstrated essentially what he wanted to achieve. Later that day he returned
to my office, sporting a big grin. “It worked,” he said. I asked him to be more
specific, and he described how he succeeded in inserting more than 18,000
records into a database table in about two seconds.
Now that’s performance. I can’t guarantee that everyone can achieve numbers
like these, but if you are going to try, Array DML is the way to go, if it’s an
option for you.
Array DML provides a mechanism that supports high-speed data manipulation
using parameterized query and stored procedure execution. The DML part of the
name “Array DML” refers to SQL DML statements, where the initials
DML stand for Data Manipulation Language, as opposed to SQL DDL, which
stands for Data Definition Language.
In its most basic form, there are two parts to using Array DML. The first part is
defining a parameterized SQL DML statement, such as a parameterized
UPDATE or INSERT statement. Similarly, you can use a parameterized stored
procedure call.
The second part involves configuring an array to hold the parameter values. This
array has one element for each time you want the query or stored procedure to
be executed, and each element consists of a set of one or more parameters, the
number of which depends on how many parameters are found in the associated
query or stored procedure that you need to execute.
Array DML is something that you will use only when you need to execute the
same query or stored procedure repeatedly, albeit with a different set of
parameter values upon each execution. There are, however, many situations in
which you need to do just that. For example, data warehousing applications
often involve extracting data from the live database, manipulating that data, and
then inserting the processed data into the data warehouse database. These
operations are referred to as ETL (extract, transform, and load) operations, and
they are perfectly suited for Array DML.
There are two principal advantages of Array DML. For those databases that
directly support these types of batch command operations, the amount of
communication between your application and the database server can be
dramatically reduced, and optimizations on the server can produce exceptional
performance.
The second advantage is that Array DML provides a coherent framework for
managing the manipulation of large amounts of data. The benefits of this
framework include automatic record commitments, transaction rollbacks, and
error tracking. These benefits can be realized even when the underlying database
does not natively support batch command operations.
Most of the more popular servers for which FireDAC provides drivers support
Array DML natively. These include InterBase (XE3 and higher), Firebird (2.1
and higher), Microsoft SQL Server, Oracle, IBM DB2, Informix, MySQL,
Sybase SQL Anywhere, PostgreSQL (8.1 and higher), and SQLite (3.7.11 and
higher when Params.BindMode is set to pbByNumber). The other database
engines supported by FireDAC also permit you to use Array DML techniques.
However, for these servers, the operations are emulated by FireDAC, and do not
necessarily provide a performance benefit. As mentioned earlier, however, the
Array DML framework has additional benefits, making it useful even when not
natively supported by the underlying database.
This project uses SQL queries to create a table that contains the date and that
date's day of the week for every day between two dates, inclusively. By default,
the project initializes the start date to the current date, and the end date to be ten
years after the start date. As a result, by default this project will need to insert
over 3600 records into the target table.
There are two methods in this project that prepare the database before the
records can be inserted. One of these is the OnCreate event handler for the main
form. Two TDateTimePicker components are initialized from within this
method, which are used to produce a range of dates. This date range is used to
define the records that will be inserted into a target database table.
The second method prepares the table into which those date records will be
inserted. The table is dropped if it already exists. Next, a new instance of this
One of the buttons on the main form initiates the process of Array DML. This
button is labeled Create And Populate Table Using Array DML, and its OnClick
event handler is shown here:
The method that performs the record insertion using Array DML is called
PopulateTableArrayDML, and the code associated with that method is shown in
the following code segment:
//Stop timer
StopWatch.Stop;
As you can see in this code segment, once the basic parameters of execution are
calculated, a parameterized INSERT statement defines how data will be inserted
into the underlying table.
Next, the query parameters themselves are configured. In this case, this is an
optional step as FireDAC will determine the parameter types based on the
values assigned to the associated array. However, it is often helpful if you
specifically identify the types of parameters you are defining, and, when one or
more parameters are strings, you can optimize the operation by specifying the
size of string parameters.
Before any parameter values can be assigned, it is necessary to configure the
size of the Params array. You do this by assigning an integer value to the
FDQuery's Params.ArraySize property. At a minimum, this value must be as
large as or larger than the number of records you need to insert.
Finally, Array DML is triggered by calling the FDQuery's Execute method, to
which you pass an integer indicating the size of the array that you have created.
Before each time the query is executed, data found in the Params array is bound
to the query's parameters.
The Execute method has a second, optional parameter, which is 0 by default. This
value represents the offset, which defines the array element from which to start
the execution. With the default value of 0, the query executes beginning with the
parameter values found in the first element of the array.
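A condensed sketch of the steps just described might look like the following.
The table, field, and variable names here are illustrative, not those of the
book's project:

  var
    i: Integer;
    StartDate: TDateTime;
    NumberOfDays: Integer;
  begin
    StartDate := Date;
    NumberOfDays := 3653; //roughly ten years of dates
    //Parameterized INSERT that will be executed as a single batch
    FDQuery1.SQL.Text :=
      'INSERT INTO DATE_LIST (DATE_VALUE, DAY_NAME) VALUES (:d, :n)';
    //Optionally identify parameter types and string sizes
    FDQuery1.Params[0].DataType := ftDateTime;
    FDQuery1.Params[1].DataType := ftString;
    FDQuery1.Params[1].Size := 12;
    //Size the parameter array before assigning any values
    FDQuery1.Params.ArraySize := NumberOfDays;
    for i := 0 to NumberOfDays - 1 do
    begin
      FDQuery1.Params[0].AsDateTimes[i] := StartDate + i;
      FDQuery1.Params[1].AsStrings[i] :=
        FormatDateTime('dddd', StartDate + i);
    end;
    //Execute the batch; each array element supplies one set of parameter values
    FDQuery1.Execute(FDQuery1.Params.ArraySize);
  end;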
Figure 15-2 shows the results of a sample run of this project using the Array
DML technique. A TStopWatch was used to gauge performance, and in this
instance, more than 3500 records were inserted in about 119 milliseconds.
Figure 15-2: Using Array DML permitted over 3,500 records to be inserted
in about 119 milliseconds
The second button that appears in this project is labeled Create And Populate
Table Using Queries. The event handler for this button is shown in the following
code segment. As you can see, the only difference from the Array DML version
of the button is that the PopulateTableUsingQueries method, instead of
PopulateTableArrayDML, is invoked:
CreateTable(TableName);
PopulateTableUsingQueries(TableName);
end;
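For comparison, the per-record approach amounts to something like the
following sketch, again with illustrative names:

  FDQuery1.SQL.Text :=
    'INSERT INTO DATE_LIST (DATE_VALUE, DAY_NAME) VALUES (:d, :n)';
  for i := 0 to NumberOfDays - 1 do
  begin
    FDQuery1.ParamByName('d').AsDateTime := StartDate + i;
    FDQuery1.ParamByName('n').AsString :=
      FormatDateTime('dddd', StartDate + i);
    FDQuery1.ExecSQL; //one round trip per record
  end;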
Figure 15-3: More than 3,500 records were inserted using a parameterized
query in about 798 milliseconds
If you compare the two preceding figures, you will notice a very dramatic
difference in performance between these two methods for inserting records.
When Array DML is used, records are inserted in about a tenth of a second. By
comparison, when parameterized queries are used, it takes about three quarters
of a second to perform the same insertion. In other words, Array DML is more
than four times faster than prepared parameterized queries. Of course, this
project uses InterBase, which natively supports batch command operations.
To be honest, I really tried to make the parameterized version as fast as possible.
I tried explicitly preparing the parameterized query in advance of execution, but
that made things slower, so I removed the call to prepare. No matter what I did,
the results that you see in Figure 15-3 were the best I could manage on my test
machine (which was running InterBase locally; if I had used a remote server, I
would expect the advantage of Array DML over traditional parameterized
queries to be significantly greater). Nonetheless, Array DML was consistently faster, and by
very significant amounts of time.
So how does Array DML do when used with a database that does not natively
support batch command operations? Well, one such database is the Advantage
Database Server (ADS), about which I have written a number of books. I
modified this project to execute against ADS. In those tests, Array DML
showed a small and almost negligible
performance benefit (530 milliseconds versus 560 milliseconds). Nonetheless,
over repeated executions the Array DML method showed a slight, but consistent
performance improvement over the parameterized query version.
Speaking of performance, it's worth noting that ADS is based on ISAM
(Indexed Sequential Access Method) technology, which is generally faster than
SQL-based servers when it comes to creating and modifying records. This
explains why the parameterized query results with ADS were faster than with
InterBase. But a side note to this discussion is that it is worthwhile testing
various scenarios if you are concerned about improving performance.
When called, ASender is the FireDAC dataset for which Execute was called,
ATimes is the size of the Array DML array (and may not match the Times
parameter that you passed in your call to the Execute method), and AOffset is
the index of the parameters array whose query was executing when the error
was encountered.
AError is the exception that was raised, and in the case of aeCollectAllErrors,
AError provides access to an array of exceptions. Finally, AAction permits you
to control how Array DML proceeds. If you set AAction to eaFail, the Array
DML operation will be aborted. Set AAction to eaSkip to skip this item and
continue execution with AOffset + 1. If you determine that the error was caused
by a bad parameter in the parameters array, you can edit the parameters at this
offset in the parameters array and set AAction to eaRetry.
When ArrayDMLMode is aeUpToFirstError, FireDAC will stop executing
queries upon encountering the first error. At this point, the RowsAffected
property of the FireDAC dataset will indicate how many rows were successfully
applied, and the AError parameter of the OnExecuteError event handler will
hold the exception or exceptions associated with the one failed record. In
addition, the AError.Errors[0].RowIndex property will hold the index value of
the parameters array of the query that generated the first error.
An ArrayDMLMode of aeOnErrorUndoAll will cause FireDAC to stop
processing records and roll back any changes that had been applied. Once rolled
back, FireDAC will restart the process using a mode similar to
aeUpToFirstError. If errors are encountered in this mode, handle them as you
would errors encountered during an aeUpToFirstError mode.
In the final mode, aeCollectAllErrors, FireDAC attempts to apply all records,
continuing even when errors are found. At the conclusion of execution, all
applied records are committed, the FireDAC dataset RowsAffected property
returns the number of rows successfully applied, and you can use the
AError.Errors[i].RowIndex property to determine which elements of the
parameters array encountered an error. For example, the following code will
display, one at a time, the offsets in the parameters array that cause errors:
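Inside an OnExecuteError event handler, and assuming the exception's
ErrorCount property, such code might look like this:

  for i := 0 to AError.ErrorCount - 1 do
    ShowMessage('Failed at parameter array offset ' +
      AError.Errors[i].RowIndex.ToString);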
IBM DB2, Informix, and Microsoft SQL Server support the ArrayDMLMode of
aeCollectAllErrors. Oracle, Firebird (2.1 and higher), and PostgreSQL (8.1 and
higher) support aeOnErrorUndoAll. The remainder support aeUpToFirstError
(with the exception of SQLite version 3.11.7 and higher which emulates batch
In the next chapter, you will find a detailed discussion of cached
updates.
Chapter 16
Using Cached Updates
When enabled, cached updates hold changes made to the records of FireDAC
datasets in cache. These changes can then be examined, altered, discarded, or
applied to the underlying database. By comparison, when cached updates are not
enabled, changes made to records in a FireDAC dataset are written back to the
underlying database on a record-by-record basis.
Cached updates are a feature of all FireDAC datasets. Specifically, cached
updates can be employed by FDQuery, FDStoredProc, and FDMemTable
components. While cached updates can also be used by FDTable components,
doing so is somewhat limited compared to the use of FDQueries. As a result, the
FDQuery component is the preferred component for working with cached
updates.
Cached updates permit you to perform a variety of operations that would
otherwise be difficult or impossible. For example, when using cached updates, it
is possible to apply changes to many records, and even many underlying tables,
within a transaction, ensuring that those changes are applied in an all-or-none
fashion. Cached updates also permit you to programmatically examine the one
or more records that a user has changed, ensuring that those changes are
meaningful and consistent, before attempting to save the data.
Cached updates also permit you to edit data in unconventional ways. For
example, you might permit your users to edit data acquired from one or more
tables. However, after the user has made their changes, you might write that data
to a completely different set of tables, or even to a different database.
In addition, cached updates let you get creative when the edits contain errors
that prevent the data from being written to the underlying database. For instance,
if a user makes a change that violates a referential integrity rule in the database,
your code can hold those changes in memory while the user makes corrections.
You can then attempt to re-apply those changes.
There is one more really big feature of cached updates. The changes made to
records can be cached over more than a single session. For example, a user can
query a database and start making changes in the morning. The user may then
close the application, and re-open it later in the day (or on some other day for
that matter) to make more changes. At some arbitrary time in the future, the user
could review all of his or her changes, and then ask to have those changes
applied to the underlying database. Of course, this requires that you persist the
data between editing sessions, which means either saving the data to a file or
streaming it to some storage location. That process is discussed in Chapter 11,
Persisting Data.
Most of the features of cached updates are demonstrated in the FDBasicCache
project. The main form of this project is shown in Figure 16-1.
Code: You can find the FDBasicCache project in the code download. See
Appendix A for details.
Entering and exiting the cached updates mode is demonstrated in the following
code which is associated with the button whose initial caption reads Enable
Cached Updates:
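A minimal version of such a handler might look like the following, where the
button name is hypothetical and any pending changes are simply discarded when
leaving the cached updates mode:

  if not FDQuery1.CachedUpdates then
  begin
    FDQuery1.CachedUpdates := True;
    btnCachedUpdates.Caption := 'Disable Cached Updates';
  end
  else
  begin
    if FDQuery1.UpdatesPending then
      FDQuery1.CancelUpdates; //discard any unapplied changes
    FDQuery1.CachedUpdates := False;
    btnCachedUpdates.Caption := 'Enable Cached Updates';
  end;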
Figure 16-2 shows how the main form looks at runtime, after placing the
FDQuery into the cached updates state, but before any changes have been
posted.
There is a calculated field in FDQuery1, and this field is used to display the
status of individual records on the form. The following is the OnCalcFields
event handler for this dataset, which uses UpdateStatus to determine each
record's status:
case DataSet.UpdateStatus of
usUnmodified:
DataSet.Fields[DataSet.FieldCount -1].AsString :=
'Unmodified';
usModified:
DataSet.Fields[DataSet.FieldCount -1].AsString :=
'Modified';
usInserted:
DataSet.Fields[DataSet.FieldCount -1].AsString :=
'Inserted';
usDeleted:
DataSet.Fields[DataSet.FieldCount -1].AsString :=
'Deleted';
end;
end;
If you are not already familiar with cached updates, you may have found
something in the preceding code puzzling. Specifically, how is it possible to
display a calculated field for a deleted record? I mean, it has been deleted, right,
so it no longer exists. Well, that’s one of the cool things about cached updates.
The record is marked for deletion, and will be deleted from the database if the
updates are successfully applied, but the record still exists so long as it is in
cache.
Do you want to see which records were deleted before cached updates are
applied? It is possible by changing the FilterChanges property of the FireDAC
dataset. This property is of the TFDUpdateRecordTypes type. The following is
the declaration of TFDUpdateRecordTypes:
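In FireDAC this is essentially a set type built on the rt* values used throughout
this chapter:

  type
    TFDUpdateRecordType = (rtUnmodified, rtModified, rtInserted,
      rtDeleted, rtHasErrors);
    TFDUpdateRecordTypes = set of TFDUpdateRecordType;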
When this property contains the rtDeleted flag, those records that have been
deleted will be present in the dataset, along with other records associated with
any other flags in the FilterChanges property, and will be visible if the dataset is
being displayed using a data-aware control. The following code uses the
FilterChanges property of an FDMemTable to display only those records that
have been modified, inserted, or deleted:
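That code amounts to a single assignment:

  FDMemTable1.FilterChanges := [rtModified, rtInserted, rtDeleted];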
Figure 16-3 shows the calculated field displaying the change type, as well as a
filtered FDMemTable based on FDQuery1 that displays modified, inserted, and
deleted records.
Figure 16-3: The FilterChanges property permits you to see your changes,
including deleted records
Figure 16-4 depicts the FDBasicCache project with some changes, and a
modified record selected in the FDQuery. The original and current values of the
Customer field are displayed in the status bar.
end;
It is worth noting that once EnableControls is called for the FDQuery, the
DataSource's OnDataChange event handler triggers. If the previously saved data
included cached records, the call to UpdateButtons found in the OnDataChange
event handler causes the buttons associated with manipulating the cache to be
enabled once more.
end;
As you can see, the UndoLastChange method takes a single Boolean parameter.
When True is passed, and undoing the last change alters the position of that
record within the dataset, the record remains the current record and the cursor
shifts position to follow it. If False is passed, the current record may appear to
fly away if the restored value requires the record to appear in a different
location within the dataset.
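A typical call, then, looks like this, with FDQuery1 standing in for the
project's dataset:

  FDQuery1.UndoLastChange(True); //True: the cursor follows the undone record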
Canceling a Specific Change
While UndoLastChange implements a LIFO (last in, first out) type operation, it
is possible to undo the change to any record in cache by calling RevertRecord.
For example, you can use FilterChanges to display only deleted records, and
then restore any given deleted record by making it the current record and then
calling RevertRecord.
The use of RevertRecord is shown in the following event handler. So long as the
current record has been changed in some way, this event handler will restore it
to its original state (an inserted record will be removed):
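The core of such a handler is simply a test and a call:

  if FDQuery1.UpdateStatus <> usUnmodified then
    FDQuery1.RevertRecord;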
Applying Updates
One of the primary reasons for caching updates is to perform a batch application
of those edits to an underlying database. While that might sound obvious, it is
worth pointing out that cached updates don’t always result in the application of
changes to the data. Sometimes the changes are discarded because the user
realized that they were working on the wrong set of records. In other
circumstances, your use of cached updates might be to observe a user’s editing
behavior without any intention of saving those edits. Granted, this latter scenario
may be rare, but it’s not out of the question.
Having made those qualifications, it is true that in most cases the end result of a
cached updates session is to attempt to write the cached changes to a database.
And there are two general approaches to doing this: The brute force method and
calling the ApplyUpdates method.
Apply Updates Using Brute Force
I mentioned the brute force method in passing earlier in this chapter when
introducing the CommitUpdates method. The brute force method involves your
systematically filtering the cache on the changes it contains, and manually
writing the changes to the underlying database.
For example, you might first filter the cache on inserted records, and then
execute one SQL INSERT statement for each record in the filtered set. Next,
you could filter on deleted records, issuing one SQL DELETE statement for
each of those records. Finally, you would filter on modified records. After doing
so, you would use the OldValue and CurValue properties of the underlying
TFields to determine which fields were changed, and then construct the
appropriate SQL UPDATE statements to apply those changes. When you are
done updating the data, you would call CommitUpdates to flush the cache and
mark all of the records in memory as unchanged (or call CancelUpdates and
then refresh the datasets).
The brute force method requires a significant amount of code, and you are
responsible for handling errors if they arise. As a result, the brute force method
is typically employed when the use of the ApplyUpdates method is impractical
or impossible.
Calling the ApplyUpdates Method
When you call ApplyUpdates, FireDAC examines the contents of the cache and
attempts to apply each of the changes, one at a time. When using ApplyUpdates
there are two modes: automatic updates, where you permit FireDAC to generate
and execute the queries that apply the updates, and manual mode, where you
write an OnUpdateRecord event handler that FireDAC calls once for each
update in the cache, and in which your code applies the update.
I am going to begin this section with a discussion of the process of calling
ApplyUpdates, and I will cover the use of the OnUpdateRecord event handler
later in this section.
Since handling errors during the call to ApplyUpdates is more complicated than
when a single record is being updated, I have created a project named
FDCachedUpdatesErrors. This project can help you to better understand the
process of calling ApplyUpdates, and what happens when there are errors in the
cache. The main form for this project is shown in Figure 16-5.
Because this project needs to support both successful posts as well as failures, I
have created a very simple table in the employee.gdb database. This table is
finally
FDQuery.Free;
end;
TwoFieldsTable.SQL.Text := 'SELECT * FROM ' + TabName;
{$IF CompilerVersion >= 30.0} // Delphi 10 Seattle or later
TwoFieldsTable.UpdateOptions.AutoCommitUpdates := True;
{$ENDIF}
TwoFieldsTable.UpdateOptions.CheckRequired := False;
TwoFieldsTable.UpdateOptions.CheckReadOnly := False;
TwoFieldsTable.UpdateOptions.CheckUpdatable := False;
TwoFieldsTable.CachedUpdates := True;
TwoFieldsTable.Open;
{$IF CompilerVersion < 30}
cbxAutoCommitUpdates.Enabled := False;
{$ELSE} // Delphi 10 Seattle or later
TwoFieldsTable.UpdateOptions.AutoCommitUpdates :=
cbxAutoCommitUpdates.Checked;
{$ENDIF}
CurrentTwoFieldsTable.Open('SELECT * FROM ' + TabName );
end;
In addition to the button that you press to apply updates to the TwoFieldsTable
FDQuery, there are two additional buttons that permit you to reset the form to
its initial state, which I found useful while testing various forms of the
ApplyUpdates method. The first button, labeled Cancel Updates, clears the
current contents of the cache by calling the CancelUpdates method. The second
button, labeled Cancel Updates And Empty TwoFieldsTable, both clears the
cache and deletes all of the current records from the TwoFieldsTable table in
employee.gdb. The event handlers for these two buttons are shown here:
procedure TForm1.btnCancelUpdatesAndEmptyTwoFieldsTableClick(
Sender: TObject);
begin
TwoFieldsTable.CancelUpdates;
EmployeeConnection.StartTransaction;
try
EmployeeConnection.ExecSQL('DELETE FROM TWOFIELDSTABLE');
EmployeeConnection.Commit;
except
EmployeeConnection.Rollback;
raise;
end;
TwoFieldsTable.FilterChanges := [ rtUnmodified, rtModified,
rtInserted, rtDeleted, rtHasErrors ];
TwoFieldsTable.Refresh;
CurrentTwoFieldsTable.Refresh;
StatusBar1.SimpleText := 'The cache was canceled and ' +
'TwoFieldsTable was emptied';
end;
the records that were actually applied were still marked as modified (or inserted
or deleted) in cache. This made dealing with errors very difficult.
When AutoCommitUpdates is set to True (the default is False, so you must set
AutoCommitUpdates to True), records that are successfully applied are removed
from the cache (assuming that the call to ApplyUpdates has not been aborted
due to excessive errors; I’ll come back to this issue). This behavior, which is
similar to how the ClientDataSet operates when its ApplyUpdates method is
called, is highly desirable. It is so desirable that I strongly recommend that if
you are using Delphi XE8 or earlier, you upgrade your copy of Delphi so that
you can make use of FireDAC’s cached updates capabilities.
In the following discussion, I am going to assume that you are using Delphi 10
Seattle or later, and have set AutoCommitUpdates to True. The easiest way to
do this is to change this property to True in your FDConnection, or better yet, in
your FDManager. If you do not do this, or you cannot upgrade to Delphi 10
Seattle or later, you should use the FDCachedUpdatesErrors project, or an
adaptation of it, in order to completely understand how to successfully manage
the cache when errors are present during a call to ApplyUpdates.
Ok, now that we’ve got that out of the way, let’s talk about ApplyUpdates. This
method has the following signature:
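It is essentially the following, where the return value is the number of errors
encountered while applying the cache:

  function ApplyUpdates(AMaxErrors: Integer = -1): Integer;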
Note: When I say all applied updates are removed from cache, I again am
referring to Delphi 10 Seattle and later with the
UpdateOptions.AutoCommitUpdates property set to True. Since this assumption
is made for the remainder of the discussions in this chapter, I am not going to
mention it again.
Figure 16-6: Two records are valid and two will fail to post
Here is the OnClick event handler associated with the button labeled Call
ApplyUpdates with MaxErrors:
begin
if TwoFieldsTable.State in dsEditModes then
TwoFieldsTable.Post;
NumErrors := TwoFieldsTable.ApplyUpdates(
StrToInt( edtMaxErrors.Text ) );
TwoFieldsTable.FilterChanges := [ rtModified, rtInserted,
rtDeleted, rtHasErrors ];
CurrentTwoFieldsTable.Refresh;
StatusBar1.SimpleText := 'There were ' + NumErrors.ToString +
' errors encountered. Records remaining in cache: ' +
TwoFieldsTable.ChangeCount.ToString;
end;
When you click the button labeled Call ApplyUpdates with MaxErrors, and you
have set MaxErrors to -1, all successfully applied records are removed from
cache, and the unsuccessfully applied records remain, as shown in Figure 16-7.
Figure 16-7: Two records have been committed and removed from cache,
and two records remain in cache
When you pass a value of 0 to AMaxErrors, you are signaling no tolerance for
errors. Under this condition, if even one error is encountered, FireDAC
terminates the update process immediately, the cache is restored to its pre-
ApplyUpdates state (all inserts, deletes, and modifications remain intact), and no
updates are applied to the underlying database. Assuming that
FDCachedUpdatesErrors looked like it did in Figure 16-6, with the exception
that MaxErrors was set to 0, clicking the apply updates button would produce
the result shown in Figure 16-8.
its pre-ApplyUpdates state, and no updates will have been written to the
underlying database.
The successful application of records when no more than MaxErrors are
encountered is demonstrated in Figure 16-9. Here we set MaxErrors to 5. Since
there are only two errors, the successful records are applied and failed updates
remain in cache.
Figure 16-9: Since MaxErrors was not exceeded, the successful records
were applied and removed from cache. The failed records remain in cache
Figure 16-10: MaxErrors was exceeded, MaxErrors plus one error were
reported, and the cache remains unchanged
In most cases, you only need to concern yourself with arInsert, arUpdate, and
arDelete requests.
The AAction parameter permits you to inform FireDAC about your action,
which it might use to abort the update process. AAction is of the TFDErrorAction
type, whose declaration is shown here:
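It enumerates the actions discussed in this section; the declaration is along
these lines (the ordering of the values shown here is not authoritative):

  type
    TFDErrorAction = (eaFail, eaSkip, eaRetry, eaApplied, eaDefault);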
Action     Description
eaFail     Marks the update as failed and returns an error.
Writing OnUpdateRecord event handlers can be tricky, as you are taking full
responsibility for performing the updates. This can be seen in the following
event handler, which can be found in the FDCachedUpdatesErrors project:
begin
AAction := eaFail;
FDQuery := TFDQuery.Create( nil );
try
FDQuery.Connection := EmployeeConnection;
case ARequest of
arInsert:
begin
FDQuery.SQL.Text := 'INSERT INTO TwoFieldsTable ' +
' ("FIRST", "LAST")' +
' VALUES ( :f, :l );';
FDQuery.Params[0].AsString :=
ASender.FieldByName('FIRST').CurValue;
FDQuery.Params[1].AsString :=
ASender.FieldByName('LAST').CurValue;
FDQuery.ExecSQL;
AAction := eaApplied;
end;
arUpdate:
begin
FDQuery.SQL.Text := 'UPDATE TwoFieldsTable ' +
' SET "FIRST" = :cf, ' +
' "LAST" = :cl ' +
' WHERE "FIRST" = :of ' +
' AND "LAST" = :ol;';
FDQuery.Params[0].AsString :=
ASender.FieldByName('FIRST').CurValue;
FDQuery.Params[1].AsString :=
ASender.FieldByName('LAST').CurValue;
FDQuery.Params[2].AsString :=
ASender.FieldByName('FIRST').OldValue;
FDQuery.Params[3].AsString :=
ASender.FieldByName('LAST').OldValue;
FDQuery.ExecSQL;
AAction := eaApplied;
end;
arDelete:
begin
FDQuery.SQL.Text := 'DELETE FROM TwoFieldsTable ' +
' WHERE "FIRST" = :f ' +
' AND "LAST" = :l;';
FDQuery.Params[0].AsString :=
ASender.FieldByName('FIRST').OldValue;
FDQuery.Params[1].AsString :=
ASender.FieldByName('LAST').OldValue;
FDQuery.ExecSQL;
AAction := eaApplied;
end;
else
begin
AAction := eaDefault;
end;
end;
finally
FDQuery.Free;
end;
end;
There is, however, one approach that can significantly reduce the complexity of
an OnUpdateRecord event handler, which is to employ an FDUpdateSQL
component to perform your update. An FDUpdateSQL component can perform
the parameter binding for you, as well as assist in generating the parameterized
INSERT, UPDATE, and DELETE queries at design time. For more information
on the FDUpdateSQL component, see Chapter 5, More Data Access.
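For example, the FDUpdateSQL statements for the TwoFieldsTable dataset used above might look like the following. The statements are shown here as runtime assignments purely for illustration; the component is normally configured at design time, and the OLD_ and NEW_ parameter-name prefixes follow FireDAC's convention for referring to a field's original and new values:

FDUpdateSQL1.Connection := EmployeeConnection;
FDUpdateSQL1.InsertSQL.Text :=
  'INSERT INTO TwoFieldsTable ("FIRST", "LAST") ' +
  'VALUES (:NEW_FIRST, :NEW_LAST)';
FDUpdateSQL1.ModifySQL.Text :=
  'UPDATE TwoFieldsTable SET "FIRST" = :NEW_FIRST, "LAST" = :NEW_LAST ' +
  'WHERE "FIRST" = :OLD_FIRST AND "LAST" = :OLD_LAST';
FDUpdateSQL1.DeleteSQL.Text :=
  'DELETE FROM TwoFieldsTable ' +
  'WHERE "FIRST" = :OLD_FIRST AND "LAST" = :OLD_LAST';
// With the FDUpdateSQL assigned as the update object, FireDAC performs the
// parameter binding itself when applying updates
TwoFieldsTable.UpdateObject := FDUpdateSQL1;

With this configuration in place, the OnUpdateRecord event handler can be reduced to a few lines, or omitted altogether, because FireDAC uses the FDUpdateSQL statements when it applies each cached change.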
Regardless of how you apply your updates, errors may remain in the cache following a call to ApplyUpdates. Your options are to fix or discard any errors that were encountered.
To fix the errors, you can permit the user to make the fix, or you can do it
programmatically. In the case where you are using cached updates and there is
no user interface, programmatically fixing the errors is your only option.
For those errors that you cannot fix, or do not want to fix, you can discard those
errors. This can be done by reverting the record to its original state (inserted
records will be deleted) using the RevertRecord method, or by canceling the
cached updates entirely using the CancelUpdates method. Recall, however, that ApplyUpdates may be aborted when AMaxErrors is exceeded. When that happens, some records that are perfectly fine may remain in the cache, and canceling all updates using CancelUpdates will discard those records as well.
Fortunately, there is an easy way to determine which records in cache caused a
problem, and to be able to actually examine the exception that was raised when
the update failed. When a FireDAC dataset is in cached updates mode, each
record in that dataset has an object attached to it, and this object can be
referenced using the RowError property of the dataset. This property returns the
object associated with the current record, and that object will either be nil, or
will be the exception that was raised when FireDAC attempted to post that
change.
One use of the RowError property is demonstrated in the OnDataChange event
handler of the data source that points to the TwoFieldsTable FDQuery. This
event handler displays, in the status bar, any error associated with the current record as you navigate through the cache that appears in the upper DBGrid of the FDCachedUpdatesErrors project, as shown in Figure 16-11:
Figure 16-11: The value of the RowError exception is shown for the first
record in cache following a failed attempt to insert the record
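A handler of this kind can be as short as the following sketch (the handler name is illustrative; TwoFieldsTable and StatusBar1 are the components used throughout this project):

procedure TForm1.TwoFieldsTableSourceDataChange(Sender: TObject;
  Field: TField);
begin
  // RowError returns nil when the current cached record has no error attached
  if TwoFieldsTable.RowError <> nil then
    StatusBar1.SimpleText := TwoFieldsTable.RowError.Message
  else
    StatusBar1.SimpleText := '';
end;

FireDAC also reports each failed update through the dataset's OnUpdateError event. Its handler receives five parameters, and the event type is declared along the following lines (see FireDAC.Comp.DataSet for the exact declaration in your version of FireDAC):

type
  TFDUpdateErrorEvent = procedure(ASender: TDataSet;
    AException: EFDException; ARow: TFDDatSRow;
    ARequest: TFDUpdateRequest; var AAction: TFDErrorAction) of object;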
The first parameter, ASender, is the FireDAC dataset whose update was being
applied. This parameter is similar to the OnUpdateRecord ASender parameter,
in that the current record of this dataset is associated with the record whose
update has failed.
AException, the second parameter, is the exception that was raised as a result of
the failure.
ARow is a TFDDatSRow instance. It represents the current row of ASender, but
it has a different set of methods.
ARequest is a TFDUpdateRequest reference, and it identifies the type of update
that FireDAC (or your OnUpdateRecord event handler) was trying to apply
when the update failed. TFDUpdateRequest was described in some detail in the
section, ApplyUpdates and OnUpdateRecord, earlier in this chapter.
Finally, AAction is a TFDErrorAction value passed by reference. By default, its
value will be eaDefault, which will equate to a value of eaFail if you do not
change it. If you determine that you can fix the error, you can read the OldValue
and NewValue values of the fields of the current record, and then assign data to
NewValue if you can fix the data. When you do fix the data, you should assign
AAction the value of eaRetry, and that will cause FireDAC to either try again,
or to invoke your OnUpdateRecord event handler again.
Unfortunately, there is a bug that may prevent you from using OnUpdateRecord
successfully. Specifically, when UpdateOptions.UpdateChangedFields is set to
True, and you correct an error from OnUpdateError and set AAction to eaRetry,
OnUpdateRecord will attempt to update only the fields that you fixed, and that
may not include fields that were edited during cached updates. This will either
cause another error to be generated, or will post an incomplete record to the
underlying table.
To work around this error, you can employ an FDUpdateSQL component and define the INSERT, UPDATE, and DELETE queries yourself.
As you can see, when you select to use OnUpdateError, an event handler is
assigned to the OnUpdateError property and
UpdateOptions.UpdateChangedFields is set to False.
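An OnUpdateError event handler that follows the approach described earlier, filling in the missing field value and asking FireDAC to retry the update, looks something like the following sketch (the handler name is illustrative, and the body should be read as an outline of the technique rather than as the project's exact code):

procedure TForm1.TwoFieldsTableUpdateError(ASender: TDataSet;
  AException: EFDException; ARow: TFDDatSRow;
  ARequest: TFDUpdateRequest; var AAction: TFDErrorAction);
begin
  // if exactly one of the two fields is missing, copy the other field's
  // value into it and ask FireDAC to retry the update
  if ASender.FieldByName('FIRST').IsNull and
     (not ASender.FieldByName('LAST').IsNull) then
  begin
    ASender.FieldByName('FIRST').NewValue :=
      ASender.FieldByName('LAST').NewValue;
    AAction := eaRetry;
  end
  else
  if ASender.FieldByName('LAST').IsNull and
     (not ASender.FieldByName('FIRST').IsNull) then
  begin
    ASender.FieldByName('LAST').NewValue :=
      ASender.FieldByName('FIRST').NewValue;
    AAction := eaRetry;
  end;
  // otherwise AAction keeps its default value, and the record remains in
  // the cache with its error
end;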
Note: I reported this problem, where the record remains in cache with an
update status of usInserted, using the Embarcadero Quality Portal. You may
want to monitor the issue I created to see if and when the problem is resolved.
The issue number is RSP-17785.
DataSet is the FireDAC dataset from which the record is being applied; its current record is the one associated with the failed record in cache. The Action parameter, which is passed by reference, can be assigned one of the following TFDDAptReconcileAction values:
Value Description
raAbort Aborts the call to Reconcile. No further update failures will be
processed.
raCancel Cancels the updates to the current record. Inserted records are
deleted.
raCorrect Clears the current record error state. In other words, it marks the
record as correctly applied.
raMerge Clears the current record error state, so the changes to the record
become the initial state of this record. In other words, it merges
changes to the dataset’s cache.
raRefresh Reverts the record to its original state and re-reads the record
from the underlying database. Should not be used from within
cached updates.
raSkip Skips the record, leaving its unapplied change in the cache, and
continues with the next record.
Here is the event handler that is called when the radio group box is set to use the OnReconcileError event handler:
procedure TForm1.TwoFieldsTableReconcileError(
DataSet: TFDDataSet;
E: EFDException; UpdateKind: TFDDatSRowState;
var Action: TFDDAptReconcileAction);
begin
if (not DataSet.Fields[0].IsNull) and
(not DataSet.Fields[1].IsNull) then
Action := raSkip
else
if DataSet.Fields[0].IsNull and DataSet.Fields[1].IsNull then
Action := raSkip
else
if DataSet.Fields[0].IsNull then
begin
DataSet.Edit;
DataSet.Fields[0].Value := DataSet.Fields[1].Value;
DataSet.Post;
Action := raCorrect;
end
else
if DataSet.Fields[1].IsNull then
begin
DataSet.Edit;
DataSet.Fields[1].Value := DataSet.Fields[0].Value;
DataSet.Post;
Action := raCorrect;
end
else
Action := raSkip;
end;
Figure 16-13: The OnReconcileError event handler has fixed the record,
but ApplyUpdates still needs to be called again
I then clicked the button labeled Call ApplyUpdates with MaxErrors again, and
this time, the corrected record was inserted into the underlying table and
removed from the cache, as shown in Figure 16-14.
Figure 16-14: ApplyUpdates has been called again, and the update was
applied and removed from the cache
Both the schema adapter and the individual datasets expose an UpdatesPending property, which reports whether the adapter or the particular dataset has changes. Similarly, ChangeCount can be used both on the schema adapter and individual datasets, where the former counts all changes, and the latter tells you only about the changes in the one dataset.
The centralized cached updates model is useful any time you want to manage
the cache of one or more tables through a single touch point, the schema
adapter. However, there are two scenarios where the centralized cached updates
model is particularly useful. The first is when you want to ensure that the
updates are applied in the same order in which the individual records
participating in the cached updates session were edited, inserted, and deleted. In
the decentralized model, the order in which the updates are applied is not
guaranteed.
The second scenario, and one that is somewhat related to the first, is when two
or more datasets participating in the cached updates session are related, either in
a one-to-one association, or in a master-detail relationship. This second scenario
is related to the first in that it is necessary that master table records get inserted
before detail records, and that detail records get deleted before their associated
master record gets deleted. The centralized model also supports cascading
deletes, in that, when properly configured, the deletion of a master table record
will automatically result in the deletion of associated detail records, thereby
preventing the accidental orphaning of the detail records.
The use of the centralized model of cached updates is demonstrated in the
FDCachedMasterDetail project whose main form is shown in Delphi’s designer
in Figure 16-15. This project is based on the range-based example provided in
the FDMasterDetail project, which was discussed in Chapter 9, Filtering Data.
Importantly, in order for the centralized model of cached updates to support
master-detail relationships, it is necessary that the master-detail relationship be
defined using the dynamic range-based technique. For more information about
the dynamic range-based technique for defining master-detail relationships,
please refer to Chapter 9.
The three buttons along the top of this form are used to apply updates, undo the
last change, and to cancel all updates in cache. These buttons, which are initially
disabled, are enabled once there are changes in cache. This is done from the
OnDataChange event handler that is used by both of the data sources associated
with the grids on this form. This event handler also displays information about
any changes found in cache in the status bar. The assignment of change
information to the status bar demonstrates using the ChangeCount property of
both the schema adapter and the associated datasets. This event handler, and the
ToggleButtons custom method that it calls, are shown here:
begin
  ToggleButtons( FDSchemaAdapter1.UpdatesPending );
if FDSchemaAdapter1.UpdatesPending then
StatusBar1.SimpleText :=
'There are a total of ' +
FDSchemaAdapter1.ChangeCount.ToString +
' updates in cache. The Customer table has ' +
customerQueryRB.ChangeCount.ToString +
' changes, and the Sales table has ' +
SalesQueryRB.ChangeCount.ToString
else
StatusBar1.SimpleText := 'There are no changes in cache';
end;
The OnClick event handler of the button that applies the updates does so within a transaction, rolling back if an exception is raised. The closing portion of that handler is shown here:

else
StatusBar1.SimpleText := NumChanges.ToString +
' were posted';
except
SharedDMVcl.FDConnection.Rollback;
end;
OnDataChange( nil, nil );
end;
Figure 16-16: There are three sales records for customer Dallas
Technologies
In Figure 16-17, the record for the customer Dallas Technologies has been deleted. The OnDataChange event handler has updated the Enabled property of the top three buttons, and has displayed information about the number of changes in cache in the status bar.
Figure 16-17: Deletion of the master record has cascaded, resulting in the
deletion of the three detail records
The insertion of detail records may also require some configuration. In the case
of the Customer/Sales relationship, it is necessary for the CUST_NO field in
Customer to match the CUST_NO field in Sales. This is somewhat complicated
by the fact that the CUST_NO field of the Customer table is an auto-increment
field. In an InterBase database, auto-increment values are produced by a database entity
called a generator. Other databases use other mechanisms. For example, auto-
increment fields are supported in Microsoft SQL Server through identity fields.
In the case of InterBase, two UpdateOptions properties needed to be configured
for the CustomerQueryRB FDQuery in order for it to properly generate the new
CUST_NO field value using the generator. These properties are GeneratorName
and AutoIncFields, and they are shown in Figure 16-18, where GeneratorName
is set to CUST_NO_GEN, and AutoIncFields is set to CUST_NO.
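If you prefer to configure these properties in code, the runtime equivalents of the settings shown in Figure 16-18 are the following assignments:

CustomerQueryRB.UpdateOptions.GeneratorName := 'CUST_NO_GEN';
CustomerQueryRB.UpdateOptions.AutoIncFields := 'CUST_NO';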
Figure 16-19 shows how a newly inserted record is given a temporary key field
value, which is -1 in this case. This value is used to associate the newly created
customer table record with the detail sales record.
Figure 16-19: FireDAC has created a temporary key field value, -1, for the
CUST_NO field
Once the values in cache have been posted, the temporary value is replaced by
the value created by the generator. As you can see in Figure 16-20, this value is
1016.
Figure 16-20: The value created by the generator replaced the temporary
key
Chapter 17
Understanding Local SQL
I've saved one of the more interesting features for last, and that is local SQL.
Local SQL permits you to execute SQL statements against any dataset. For
example, you can perform a query against an FDTable to gather simple
aggregate statistics like SUM and AVG from the data it contains. Similarly, you
can query an FDQuery and perform a left outer join to an FDStoredProc
component (in which case, the stored procedure must return a result set).
Importantly, this ability to query datasets is not limited to FireDAC datasets. As a result, there is nothing to prevent you from executing a query that joins an IBQuery, an SQLDataSet, and a ClientDataSet.
FireDAC performs this SQL sleight of hand by converting SQL statements into
TDataSet calls. For example, an SQL INSERT query is implemented by
converting the SQL into calls to TDataSet.Post and TDataSet.Append.
Similarly, WHERE clauses are implemented through TDataSet.SetRange and
TDataSet.Filter. This is the reason that Local SQL supports any TDataSet
implementation, including those provided by third-party providers, such as
Direct Oracle Access and UniDAC.
It's a very clever solution, and one that enables a whole range of interesting
data-related operations that would otherwise be difficult or impossible to
implement. For example, even if you are already using another data-access
framework, such as dbExpress, or even a third-party data access framework, you
can use FireDAC to perform SELECT queries against those datasets, and then
use FireDAC's advanced features on the returned result set.
The specific dialect of SQL supported by Local SQL is based on SQLite, an
open source and cross-platform SQL engine. In fact, in order to use Local SQL,
you must have a connection that employs the FireDAC SQLite driver.
Note: If you are using the original release of Delphi XE5 without updates, you
might be missing two files that are critical for working with SQLite. If you
encounter a compiler error indicating that one of these files is missing when
building an application using the FireDAC SQLite driver, install the latest
update for Delphi XE5.
Figure 17-1: The initial design for the FDLocalSQL project main form
c:\FireDAC\Shared Files
6. Return to the main form. From the Tool Palette, place one
FDConnection, three FDQueries, one FDLocalSQL, and one
ClientDataSet on the form. If you are using an older version of FireDAC, you might also have to add an FDPhysSQLiteDriverLink component to the form.
7. To simplify many of the steps later in this segment, give meaningful
names to most of your components using Table 17-1 as your guide.
Original Name New Name
FDConnection1 SQLiteConnection
FDQuery1 LocalQuery
DataSource1 LocalQueryDataSource
FDQuery2 SalesQuery
FDQuery3 CustomerQuery
ClientDataSet1 EmployeeCDS
Table 17-1: New names for the components on the main form
When you are done your form should look something like the one shown in
Figure 17-3.
Figure 17-3: Data access components have been placed on the form and
provided with meaningful names
Figure 17-4: FireDAC has confirmed that it can access the SQLite engine
3. Click Execute to test this query. Your screen should look something like
that shown in Figure 17-5. Click OK to close the FireDAC Query Editor
and save the query in SalesQuery.
Figure 17-5: The SELECT * FROM Sales query returns all records and
fields from the Sales table of employee.gdb
c:\users\public\documents\embarcadero\studio\19.0\
samples\data\customer.xml
Configuring FDLocalSQL
All the prerequisite requirements for using Local SQL are now set. The
remaining steps are necessary to configure the FDLocalSQL component, as well
as the FDQuery that will actually execute the SQL against your datasets.
1. Select FDLocalSQL1. Its Connection property should already be
assigned to SQLiteConnection. If not, set Connection to
SQLiteConnection now.
2. Next, select the DataSets property of the FDLocalSQL and click the
ellipsis button to display the DataSets collection editor. Click the Add
New button on the DataSets collection editor three times to add three
new datasets to the collection.
3. Select the first dataset in the collection editor and use the Object
Inspector to set the DataSet property to SalesQuery and its Name
property to Sales. Next, select the next dataset in the collection and set its DataSet property to CustomerQuery and its Name property to
Customer. Finally, select the last dataset and set its DataSet property to
EmployeeCDS and its Name property to Employee. Your DataSets
collection editor should now look something like that shown in the
following illustration.
4. Close the DataSets collection editor. The last step we need to perform on
the FDLocalSQL component is to set its Active property to True. This is
essential. If you do not take this step, any queries against the SQLite
connection will fail.
5. We are finally ready to create our local query. Double-click the
FDQuery named LocalQuery to display its FireDAC Query Editor. The
query that we need can make use of most of the syntax supported by the
SQLite engine, with very few exceptions. For example, data definition
language (DDL) SQL statements such as ALTER and DROP are not
supported. However, most of the SQL statements that you are likely to
use are supported, such as SELECT, INSERT, and DELETE.
6. Enter the following SQL statement into the FireDAC Query Editor.
Notice that in this statement we used the names that appear in the
FDLocalSQL's DataSets collection editor to refer to the individual
datasets that we are querying:
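The project's statement also brings in the Employee dataset, as Figure 17-7 will show. A simplified sketch of the kind of statement this step calls for, joining just the Customer and Sales datasets on their shared CUST_NO field, looks like this (Customer and Sales are the names defined in the DataSets collection, not tables in a database):

SELECT c.CUSTOMER, s.PO_NUMBER, s.ORDER_DATE, s.TOTAL_VALUE
FROM Customer c
  JOIN Sales s ON (s.CUST_NO = c.CUST_NO)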
7. Test your query by clicking the Execute button. Your Query Editor
should look something like that shown in Figure 17-6.
Figure 17-6: Testing a Local SQL query in the FireDAC Query Editor
Figure 17-7: A DBGrid displays the result set created by a local query that
joins two FDQueries with a ClientDataSet
In the preceding steps, we added the datasets against which we were going to
execute SQL statements to the FDLocalSQL’s DataSets property. There are two
alternative ways of assigning datasets to the FDLocalSQL.DataSets property.
Both of these alternatives actually result in the dataset being added to the
DataSets property, but do so indirectly.
The first alternative is available only when your datasets are FireDAC datasets.
In those cases, you can use their LocalSQL property to assign them to the
FDLocalSQL.DataSets property. In these cases, the name of the FireDAC
dataset component can be used in the SQL query. For example, if, instead of adding the SalesQuery and CustomerQuery FDQueries to the DataSets collection of FDLocalSQL1, we had set the LocalSQL property of these two FDQueries to FDLocalSQL1, we could have modified the SQL associated with the LocalQuery component to look like the following:
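Using the same simplified sketch shown earlier, the FROM clause would then refer to the FDQuery component names rather than to the names defined in the DataSets collection:

SELECT c.CUSTOMER, s.PO_NUMBER, s.ORDER_DATE, s.TOTAL_VALUE
FROM CustomerQuery c
  JOIN SalesQuery s ON (s.CUST_NO = c.CUST_NO)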
Code: The FDLocalSQL project can be found in the code download. For more
information, see Appendix A.
Initially, none of the data access components is active. SalesQuery, CustomerQuery, and EmployeeCDS are activated when the form is first created. This is also when the combobox is loaded with customer names, and the FDLocalSQL component is made active.
Because this project is designed to provide some flexibility in the local query, the LocalQuery component's query includes both a parameter and a macro. Both can be seen in the form's OnCreate event handler.
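The handler is not reproduced verbatim here; the following sketch captures what it does. The dataset, FDLocalSQL, and checkbox names come from the project, while the SQL text, the macro name (CustFilter), the parameter name (MinTotal), and the combobox name (cbxCustomer) are illustrative assumptions:

procedure TForm1.FormCreate(Sender: TObject);
begin
  SalesQuery.Open;
  CustomerQuery.Open;
  EmployeeCDS.Open;
  // load the combobox with customer names
  cbxCustomer.Items.Add('All customers');
  while not CustomerQuery.Eof do
  begin
    cbxCustomer.Items.Add(
      CustomerQuery.FieldByName('CUSTOMER').AsString );
    CustomerQuery.Next;
  end;
  CustomerQuery.First;
  cbxCustomer.ItemIndex := 0;
  FDLocalSQL1.Active := True;
  // the query contains both a parameter (:MinTotal) and a macro (&CustFilter)
  LocalQuery.SQL.Text :=
    'SELECT c.CUSTOMER, s.PO_NUMBER, s.ORDER_DATE, s.TOTAL_VALUE ' +
    'FROM Customer c JOIN Sales s ON (s.CUST_NO = c.CUST_NO) ' +
    'WHERE s.TOTAL_VALUE > :MinTotal &CustFilter';
  cbxUseOnGetDataSetChange( cbxUseOnGetDataSet );
end;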
The last line of the preceding code makes a call to the OnChange event handler
of the checkbox named cbxUseOnGetDataSet. This event handler, shown here,
toggles between the code using the DataSets property of the FDLocalSQL
component to identify the datasets named in the query, and the OnGetDataSet
event handler:
Note: If you are running this project in the debugger, and select the checkbox to
use OnGetDataSet, you will get a series of exceptions when you click the
Execute Query button. Specifically, there will be one exception for each call to
OnGetDataSet. This is an internal exception, and does not reflect an actual
error. These exceptions are handled internally, and you will not see them when you run the project outside of the debugger.
There is only one more event handler in this project, and that is the one
associated with the button labeled Execute Query. From within this event
handler, a value is assigned to the macro depending on what the user selected in
the combobox, and the parameter is also assigned a value. Macro substitution
was used here because the macro is replaced with either an empty string or a
complex SQL segment. This replacement cannot be performed using a simple
parameter. (For information on macro substitution, please see Chapter 14, The
SQL Command Preprocessor.)
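A sketch of that handler follows, using the same illustrative names introduced in the OnCreate sketch above; the button and edit box names are also assumptions rather than the project's exact identifiers:

procedure TForm1.btnExecuteQueryClick(Sender: TObject);
begin
  LocalQuery.Close;
  // the macro is either removed or replaced with an additional condition
  if cbxCustomer.ItemIndex > 0 then
    LocalQuery.MacroByName('CustFilter').AsRaw :=
      'AND c.CUSTOMER = ' + QuotedStr( cbxCustomer.Text )
  else
    LocalQuery.MacroByName('CustFilter').AsRaw := '';
  // the parameter restricts the result set to orders above the entered total
  LocalQuery.ParamByName('MinTotal').AsCurrency :=
    StrToCurrDef( edtMinTotal.Text, 0 );
  LocalQuery.Open;
end;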
Figure 17-9 shows how it appears when it is run and the Execute Query button
is pressed without restricting records.
Figure 17-9: The Execute Query button is pressed to run the query without
restricting records
After having selected to display only records associated with 3D-Pad Corp. in
the combobox, and records with a total value of over $1000, pressing the
Execute Query button produces the result set shown in Figure 17-10.
There are other ways to produce the effects that we saw here. We could have used parameterized queries against a database that included all tables of interest, and we could have used filtering to achieve some of these effects, but there are advantages to the Local SQL approach. First, once the data is loaded, we can do all sorts of interesting things with it without any further roundtrips to the server.
Second, we can get very creative with the SQL, producing results at runtime that would otherwise be difficult or impossible to achieve. For example, our queries can involve complex calculations using FireDAC scalar functions, calculations that would otherwise have to be anticipated in advance with calculated fields.
A third advantage is that Local SQL permits you to execute heterogeneous SQL
queries across two or more physical databases. Local SQL is the only way that
you can do this.
Sometimes a local query fails to open because it cannot resolve one of the names it refers to. Such errors usually indicate that the names assigned to the datasets in the DataSets property are invalid.
These errors are frustrating, and can be difficult to correct. In fact, I've had
situations where I have had to remove the offending FDLocalSQL component
and replace it with a new one, which I then have to configure over again.
However, this is the only way that I have managed to correct these errors in
some cases. My recommendation is that if you run into issues like these, you simply replace the offending component rather than fight it. You will likely save yourself significant time.
Finally, I want to mention that this chapter has not been an exhaustive survey of
Local SQL. It is intended to get you started, but there are properties and
methods that have not been discussed. If you find Local SQL to your liking, you
may want to refer to the online help to explore additional capabilities available
to you.
Appendix A
Code Download, Database Preparation, and Errata
Many code samples have been shown throughout this book, and the source code
for these projects is available online for you to download. Most of these
examples make use of databases and files that ship with Delphi or InterBase,
and you may need to make some modifications to Delphi or one of the sample
files before you can successfully run these projects. This appendix discusses
how to download and install the code samples, as well as how to enable data
access.
The final section in this appendix describes where you can find updated
information about this book, in case errors or issues are discovered after its
publication.
Code Download
The code samples associated with this book are found in a zip file that can be
downloaded from the following URL:
http://www.JensenDataSystems.com/firedacbook/firedacbookcode.zip
After you download this file, you should unzip it into a single directory. When you are done, that directory will contain approximately 40 folders.
Many of the projects use a data module and unit from the Shared Files folder.
This folder is always assumed to be in a directory parallel to the folder in which
the project that uses these files resides. For example, the FDSaveAndLoad
project uses files in the Shared Files folder. The FDSaveAndLoad folder and the
Shared Files folder should be in the same subdirectory. For example, if you
unzipped the code samples zip file into a folder named FireDAC on your C
drive, the FDSaveAndLoad folder will be located in the following directory:
c:\FireDAC\FDSaveAndLoad

and the Shared Files folder will be located in:

c:\FireDAC\Shared Files
Likewise, some of the projects in the code samples use a saved FDMemTable
named BigMemTable.xml, and that file can be found in the BigMemTable
folder. Just as it is with the Shared Files folder, the BigMemTable folder must
be in a directory parallel to the folder in which the project that uses
BigMemTable.xml is located.
Note: The BigMemTable.xml file contains fictional data for use in testing
FDMemTable performance, and is copyrighted by Jensen Data Systems, Inc.
For more information, consult the file named About BigMemTableXml.txt,
which is located in the same folder as BigMemTable.xml.
These projects are specifically designed for Delphi or RAD Studio XE6 and
higher. While you can compile many of them in Delphi XE5 (the first version of
FireDAC that employed the FD prefix for component names), there are a lot of
projects that require units that were not available in Delphi XE5, and therefore
will not compile. These units include FireDAC.Stan.ExprFuncs,
FireDAC.Stan.StorageBin, FireDAC.Stan.StorageXML, and
FireDAC.Stan.StorageJSON.
In addition, you will need to use the Professional version of Delphi, or higher.
These versions of Delphi include the FireDAC InterBase driver, which is used
by a large number of the sample projects.
Even though all of these projects will compile in Delphi XE6, my
recommendation is that you use one of the latest versions of Delphi, certainly no
earlier than Delphi 10 Seattle. A number of important features were introduced
in Delphi 10 Seattle. For example, if you want to employ cached updates, I
recommend that you use no version earlier than Delphi 10 Seattle, as this is the
version in which UpdateOptions.AutoCommitUpdates was introduced. You can
find a detailed discussion of the importance of this property in Chapter 16,
Using Cached Updates.
Database Preparation
Many of the sample projects employ one of several InterBase databases that ship
with Delphi. This will require that you have installed the InterBase server (the
free InterBase Developer Edition is sufficient) before you can run these projects.
Installation
InterBase must be installed and started before you can successfully use these
projects. If you have only just installed RAD Studio and InterBase, and have not
yet re-booted your computer, InterBase is probably not running. Before you can
use InterBase, you will need to either re-boot your computer, or launch the
Services applet from the Administrative Tools section of the Control Panel, and
then start the InterBase service (or the InterBase Guardian service).
If these services are not configured to start automatically, you may want to
consider changing them to start automatically. This can be done within the
Services applet by right-clicking the InterBase Guardian service and setting its Startup Type property to Automatic.
In some versions of Delphi, InterBase will need to be started manually each time
you boot your computer. If you get an error the first time you try to run one of
the sample projects after re-booting, open the Services applet and re-start the
InterBase Guardian service. I’ve found this step necessary on occasion.
The DataPaths Unit
Many of the sample projects in the code download rely on sample databases and
files that ship with Delphi and RAD Studio. Unfortunately, where these files are
installed on your computer depends on which version of Delphi you are using.
When I designed these sample projects, I specifically wanted to make it easy for
you to accommodate different versions of Delphi, and I accomplished this by
using a unit named DataPaths.pas. This unit, which is located in the Shared Files
folder of the code download, defines three constants that are used by the sample
projects to locate the needed data. In addition, this unit validates one of these
paths, and also assembles a TStringList that defines the connection parameters
for the most used database (employee.gdb).
The following is a partial listing of this unit (some of the conditional statements
have been omitted for brevity).
unit DataPaths;
interface

uses
  System.SysUtils, System.Classes;

type
  EFireDACBookException = class( Exception );
var
ConnectionParams: TStrings;
const
{$IF CompilerVersion > 31.0} //Delphi 10.2 Tokyo and above
IBEmpPath = 'C:\Users\Public\Documents\Embarcadero\Studio\19.0\' +
'Samples\Data\employee.gdb';
IBDemoPath = 'C:\Users\Public\Documents\Embarcadero\Studio\19.0\' +
'Samples\Data\dbdemos.gdb';
SamplePath =
'C:\Users\Public\Documents\Embarcadero\Studio\19.0\Samples\Data';
{$ELSE}
IBEmpPath = 'C:\ProgramData\Embarcadero\InterBase\gds_db\' +
'examples\database\employee.gdb';
{$ENDIF}
{$IF CompilerVersion = 31.0} //Delphi 10.1 Berlin
IBDemoPath = 'C:\Users\Public\Documents\Embarcadero\Studio\18.0\' +
'Samples\Data\dbdemos.gdb';
SamplePath =
'C:\Users\Public\Documents\Embarcadero\Studio\18.0\Samples\Data';
{$ENDIF}
{$IF CompilerVersion = 30.0} //Delphi 10 Seattle
IBDemoPath = 'C:\Users\Public\Documents\Embarcadero\Studio\17.0\' +
'Samples\Data\dbdemos.gdb';
SamplePath =
'C:\Users\Public\Documents\Embarcadero\Studio\17.0\Samples\Data';
{$ENDIF}
// ... more definitions appear below here
var
BigMemTablePath: string;
implementation
uses System.IOUtils;
procedure ValidatePath( const Path: string );
var
  Found: Boolean;
begin
  if Path.EndsWith('.gdb') then
    Found := TFile.Exists( Path )
  else
    Found := TDirectory.Exists( Path );
  // raise an exception when the expected file or directory is missing
  // (the message text here is illustrative)
  if not Found then
    raise EFireDACBookException.Create(
      'Cannot find ' + Path +
      '. Update the path constants in DataPaths.pas for your installation.' );
end;
initialization
ConnectionParams := TStringList.Create;
DataPaths.ValidatePath( IBEmpPath );
ConnectionParams.Add('Database=localhost:' +
DataPaths.IBEmpPath);
ConnectionParams.Add('User_Name=sysdba');
ConnectionParams.Add('Password=masterkey');
ConnectionParams.Add('Protocol=TCPIP');
ConnectionParams.Add('DriverID=IB');
BigMemTablePath := ExtractFilePath( ParamStr( 0 ) );
BigMemTablePath := BigMemTablePath +
'..\BigMemTable\BigMemTable.Xml';
finalization
if Assigned( ConnectionParams ) then
ConnectionParams.Free;
end.
The three constants defined in this unit are IBEmpPath, IBDemoPath, and SamplePath. IBEmpPath points to the employee.gdb database, which is used by a majority of the sample projects. SamplePath points to the directory in which a large number of sample files are found, including Paradox tables, InterBase database files, and ClientDataSet files. IBDemoPath points to the dbdemos.gdb InterBase database in this same folder.
If you cannot run some of these sample projects because your installation does
not match the paths defined in DataPaths, locate the correct folders and use this
information to update the constant definitions in the DataPaths unit.
Using SharedDMVcl
While the DataPaths unit defines the locations for the code samples, it is the
SharedDMVcl and SharedDMFmx units that define a FireDAC connection that
is used by many of the sample projects. These two units define data modules,
and you are welcome to use these units, in conjunction with DataPaths.pas, in
sample and test projects that you create as you work through the examples in
this book.
If you add SharedDMVcl and DataPaths to an existing project, there is one
additional step that you must take in order to successfully use the FDConnection
on these data modules. You must ensure that the data module is created before
creating any form that needs to use the connection.
The following steps demonstrate one of the ways that you can ensure that the
data module is created before the forms that need to use the FDConnection on
the data module:
1. After adding SharedDMVcl (or SharedDMFmx) and DataPaths to an
existing project, select Project | Options from Delphi’s main menu (or
press Ctrl-Shift-F11) to display the Project Options dialog box.
2. Select the Forms tab on the Project Options dialog box.
3. Move the data module to the top of the Available Forms list, as shown in
Figure A-1.
4. Click OK to save your updated project options.
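These steps simply reorder the CreateForm calls in the project's .dpr file so that the data module is created first. The result looks something like the following, where TSharedDMVcl and TForm1 stand in for your project's actual data module and form classes:

begin
  Application.Initialize;
  Application.CreateForm(TSharedDMVcl, SharedDMVcl); // data module created first
  Application.CreateForm(TForm1, Form1);
  Application.Run;
end.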
If you cannot open the EMPLOYEE named connection in the Data Explorer,
you may have an invalid path you need to correct in the Database parameter of
this connection definition. (You should be able to open this connection using the
username sysdba and the password masterkey.) To correct the connection
configuration, right-click the EMPLOYEE named connection in the Data
Explorer and select Modify. Correct the path to the employee.gdb database in
the Database parameter of the FireDAC Connection Editor. See DataPaths.pas
for the correct location of this database for your version of Delphi.
c:\ProgramData\Application Data\
Embarcadero\InterBase\gds_db\examples
Note that the c:\ProgramData folder is a hidden folder by default. If you do not
see this folder from the Windows Explorer, you can type c:\ProgramData\ into
the address line of the Windows Explorer and press Enter, and then the folders
under that directory will become visible. You can also change your general
folder viewing options to show hidden folders, files, and drives, and then
ProgramData will always be visible from Windows Explorer.
If you need to execute the ib_udf.sql script, you can do so easily for the
employee.gdb database used in many of the projects in the code download by
using the following steps:
1. Using the Data Explorer, expand the FireDAC node, and then the
InterBase node to expose the named InterBase connections.
2. Right-click the EMPLOYEE node and select Modify to display the
FireDAC Connection Editor for this named connection.
3. Using Notepad, or some other file editor, open the ib_udf.sql script file
from the directory given earlier in this section.
4. Copy the contents of this script file to the Windows clipboard.
5. Select the SQL Script tab of the FireDAC Connection Editor, and paste
the contents of the script into the top pane, as shown in Figure A-3.
Figure A-3: The ib_udf.sql script has been pasted into the SQL Script pane,
ready for execution
6. Click the green arrow in the displayed toolbar to execute the query.
7. You can now click the OK button to close the FireDAC Connection
Editor. Your FireDAC scalar functions should now work properly.
Errata
All of us who have worked on this book have tried to ensure that the
descriptions contained here are accurate. Nonetheless, there are bound to be
some errors that will come to light after this book has been published.
If substantive errors are discovered after this book has been published, we will
post corrections on the errata page associated with this book. This page can be
found at the following URL:
http://www.JensenDataSystems.com/firedacbook/errata
We suggest that you visit this page from time to time, in case a correction that applies to the work you are doing has been posted.