elemental_log
v1.0 - 26 November 2020
Compiled by Daniel Wood




Introduction

elemental_log is a utility to provide audit logging to any solution in a simple and intuitive way. 

The elemental_log file contains the core tools to manage logging within your files, along with the components you need to add to your own files and instructions on how to do so.

As a quick overview, elemental_log contains the following:

  • Quick start guide and documentation for setting up your own files
  • A populate tool for transferring record log data to elemental_log's audit log table.
  • A contextual view tool for viewing audit log data in your own solutions for whatever record(s) you are viewing.



An overview of how it works

elemental_log is intended to be a companion file that connects to your solution files and manages the processes associated with logging.  To help visualize how it works we have drawn this hopefully helpful diagram:


A script, #elemental_log, facilitates two-way communication between files for carrying out functions such as configuration, preparing logs, populating the audit log with data, and viewing the audit log data.


We log in JSON first

A JSON-based change log is maintained on every table you wish to log. This captures modifications made to the record in JSON format. It does this using a custom function - @ELEMENTAL_LOG.  This function also asks elemental_log what fields it should be logging in whatever table it is being run from.

Below is an example of what this JSON log looks like for a typical record. It stores some information about the file and table being logged, as well as the primary key value for the record. The initial_values and last_values objects help us determine whether changes are made, and the changes themselves are kept in the log array.

{
  "data" :
  {
    "information" :
    {
      "base_table" : "Contacts",
      "file_name" : "Contacts_Demo",
      "primary_key" : "E13A29C6-FA04-472C-AFA6-9E60A87BD6DB"
    },
    "initial_values" :
    {
      "email" : "amaclead@outlook.com",
      "name_family" : "Maclead",
      "name_given" : "Abel",
      "phone_home" : "631-335-3413",
      "phone_work" : "631-677-3675",
      "photograph" : "6h0HeYG_.jpg"
    },
    "last_values" :
    {
      "email" : "amaclead@hotmail.com",
      "phone_home" : "631-335-3422"
    }
  },
  "log" :
  [
    {
      "account" : "elemental_log",
      "change_state" : "onModify",
      "field_name" : "email",
      "id_session" : "",
      "layout_name" : "Contacts",
      "modified" : "2020-11-25 9:17:21 PM",
      "script_name" : "",
      "utc" : 63741889041037,
      "uuid" : "3131452287154726348103204583604141493936399793380326844074",
      "value" : "amaclead@hotmail.com"
    },
    {
      "account" : "elemental_log",
      "change_state" : "onModify",
      "field_name" : "phone_home",
      "id_session" : "",
      "layout_name" : "Contacts",
      "modified" : "2020-11-25 9:17:25 PM",
      "script_name" : "",
      "utc" : 63741889044397,
      "uuid" : "4851487553280602608269468881043215978192119124950054386803",
      "value" : "631-335-3422"
    }
  ]
}

As you can see, we capture quite a bit of useful information about every change made - more on this later.


JSON to FileMaker Records

JSON is nice, and somewhat readable when formatted, but this is not the end of the journey for this log data. FileMaker gives us no easy way to search or view JSON data in a single location.

For this reason, elemental_log assists us by providing 2 tools - Populate and View.

Populate is the process of obtaining all changes in the JSON logs from files, and turning those changes into FileMaker records in a table - LogData - in elemental_log.

With changes stored in this table, there are 2 tools provided for searching and viewing this data - the Full Log Viewer, and the Contextual Log Viewer.




Before we begin…

In this section we’ll cover off a couple of things you should be aware of when embarking on your logging journey.


You should use one copy of elemental_log per solution

While elemental_log can track and facilitate logging in multiple files, for now all of those files should belong to a single solution. If you plan to use elemental_log in multiple solutions, you should make copies of it and rename each copy to coincide with the solutions you intend to use it with.

The other reason to do this is to keep audit log data separate from other solutions' data.


Make sure your security and accounts are all good

For successful usage of elemental_log in a solution, it needs to execute a special script that is added into your own files.  In fact, there is two-way communication between elemental_log and your files:

  • Your files can initiate tools and scripts in elemental_log for carrying out various functions.
  • elemental_log communicates with your files for carrying out various functions.

For this reason you need to consider elemental_log as no different than any other file in your solution.  This might involve adding your own accounts (be it FileMaker or External) into elemental_log to ensure both files are able to communicate.

The demo privilege set in elemental_log (username/password is demo) is set with the minimal access required for an account to carry out all functions in elemental_log. You could create an account (or use the demo account) for use in elemental_log, setting it to auto-login via File → File Options.

Some tools (prepare, populate) can be automated with a Scheduled Script on FileMaker Server. If you intend to do this, then you need to ensure that the account you use in these schedules also exists in your own files, with the ability to run the elemental_log script.




Integrating into your own solution

It’s now time to integrate elemental_log into your own solution. As mentioned in the Core Stuff, elemental_log is a file that is included with your own solution, but it does require a few components be added into your own files.  In this section we’ll cover off what those components are, and why they’re important.  

All components required can be found in elemental_log.


Step One - Specify Field / Script Names

This step is not strictly required - in fact, most people will just skip it. Included in the components you add to your own solutions are 2 fields and a script.

This is your opportunity to decide what those fields & script will be named. The easiest thing is to go with the defaults, but if you have your own naming conventions you may wish to specify these names now.

This can be done in elemental_log from the Quick Start Guide, located on the home screen.


Choose what you wish these names to be.  

NOTE: You will actually need to physically rename these fields and scripts. The names are used in the logging process, which is why they are specified here.


Step Two - Set up your own file(s)

The next step involves setting up your own files with some schema & components. This is broken down into the below steps (which you should follow in order):


1 - Create an External Data Source
In your file, add an external data source (File → Manage → External Data Sources). You can name this anything you like (though we recommend elemental_log).  You should point the source to the elemental_log file (or whatever you have renamed it to).

The external data source is required because we next need to add a table occurrence to the relationship graph.
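
As an illustration, the path list of the data source might contain entries like these (the host name shown is a placeholder - substitute your own):

file:elemental_log
fmnet:/your.host.name/elemental_log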


2 - Add a Table Occurrence
The next step requires you to add a Table Occurrence to the relationship graph. This should point to the table LogFiles (and the name should be kept the same).



3 - Copy and Paste the @ELEMENTAL_LOG custom function
This custom function is responsible for maintaining the JSON-based log on records. The log field references the custom function, as does a second field which is used to determine if the log has changes in it (so that it can be turned into records in elemental_log).

The custom function also references the LogFiles table occurrence created in step 2.  This table contains the information about what tables and fields require logging in the current file - that’s how the log field knows what to do!


4 - Copy and Paste the fields found in the LogFields table.
The LogFields table in elemental_log is only there to contain the 2 fields you need to copy and paste into your own solution.

Paste these fields into any table you wish to log. You’ll only be able to configure logging for tables where these fields reside.


5 - Copy and paste the script #elemental_log
The #elemental_log script is how elemental_log communicates with your files. It does a number of things such as:

  • Request information
  • Obtain log details for turning into records
  • Reset or prepare record logs
  • and more…

It’s an important part of the process. You just need to copy and paste - no further setup required. (Remember that if you chose in Step 1 to rename this script, you should make sure the script is renamed).



Step Three - Set up elemental_log

With Step Two complete, you are now ready to choose which fields you want elemental_log to monitor and track changes for.  Technically speaking, the integration is already set and ready to go - you now use elemental_log itself to configure which fields to log.

This is done using the Configure tool. We'll leave it to that section to talk through how this tool works - you can follow the link above, or use the table of contents on the left of this article.




The Configure Tool

The configure tool is used to tell elemental_log about the files you wish to log, and which tables and fields to log. After completing the integration steps, you would run the configure tool.



Accessing the Configure Tool

Access to the tool is found within the home screen of elemental_log in the list of tools on the right hand side:


Alternatively you can launch the configure tool from your own files. Note that currently, doing this will allow you to configure any files set up for logging, not just the file you launch it from.

To launch from your own files, call the script CONFIG: Open Configuration as shown:

Perform Script [ Specified: From list ; "CONFIG: Open Configuration" from file: "elemental_log" ; Parameter:  ]


Adding your file

With the configure tool open, your next step is to add your file that you have previously setup. To do this, use the plus icon in the files section on the left as shown.


You will be prompted to enter your file name.  Make sure the name you enter matches that of a file that has been set up and connected to elemental_log.  Both your file and elemental_log should reside on the same host, or in the same folder if local.



If the file you add requires that other files authorize in order to access it, then click yes to this dialog. If you do need to authorize a file, you may see the below dialog immediately following. This is simply because authorization has introduced a slight delay in resolving a file reference we use to communicate with your file. 


Fear not - the process still works and you simply need to refresh the table listing if you see this. You can do this by clicking the refresh icon shown, or by re-selecting the file from the files list.


Choosing what fields to log

This is the fun part. Just select a table to view its loggable fields, then click a field to toggle logging on or off - it's that simple.


Logged fields will appear with a green tick. You can use the all/none toggle to quickly select everything, or clear all fields.



What can you log?

The list of fields available for selection has been pre-culled to remove any fields that cannot or should not be logged. Fields that cannot be logged include:

  • Global fields
  • Summary fields
  • Unstored calculations
  • The log field and the log flag field

NOTE: While stored calculations can technically be logged, we have decided to omit these as you can log the fields that they reference.

The list of tables that you can select for logging has also been culled to include only tables into which you have pasted the 2 required logging fields.


Defining the primary key field

It is important when configuring a table to define which field is the primary key field. Doing this ensures that we can store the primary key value for a record within its JSON change-log.  We do this so that when JSON data is transformed into FileMaker records, we can store the primary key with every change, tying changes back to their originating records.

There are three ways to define the primary key:

Specify for individual tables
The first option allows you to nominate the primary key field in each individual table. To do this, click the key icon beside the primary key field. It will turn blue to indicate it is selected.


Specify for entire file
If every table in your file has the same field name for the primary key, you can save time by just entering the field name once into the provided Primary Key Name field found in the file settings area:


Be careful - ensure the name is correct and that you do indeed use this name for every table!


Specify a combination
You can actually use both the above options in conjunction. If certain tables have their own primary key name, you can explicitly select it for each of those tables. For the rest, you can enter the name into the Primary Key Name field for them to use instead. Table-selected keys take precedence over the file-wide name.

If you have specified fields to be logged but have not specified either a file-wide name or a table-specific primary key, you will be shown a warning icon and message accordingly:




File Logging Preferences

We have included a couple of settings that you can define for files that you log:


Primary key we have already discussed, but we’ll cover off the others.


Enable Logging
A bit of a no-brainer. Turning this setting off will prevent any logging from occurring across all tables and fields set to be logged within this file.


Hidden File
Some files in a multi-file solution are never intended for end users to see or interact with. An example is the data files in an Interface/Data style solution.

If this file is not to be viewed by users, then check this option. Doing so will ensure that any action performed in the file via elemental_log will hide the file when done, keeping it hidden from users. For files intended for viewing, leave unchecked.


Conserve Space
As mentioned earlier, the change-log stored on each record is in JSON format.  JSON has the option of being formatted for readability, or left unformatted.

If you choose this option, then your JSON log data will be left unformatted. This is a good option if you have no intention of viewing the log data directly, and would prefer to view it after it is turned into records.  Doing so saves around 40% of space compared to the formatted version.

If however you want to look at the JSON and understand what you’re seeing, you might like to leave this option off.
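
For illustration, here is a small fragment of a log entry in both forms - formatted first, then the space-conserving unformatted equivalent:

{
  "field_name" : "email",
  "value" : "amaclead@hotmail.com"
}

{"field_name":"email","value":"amaclead@hotmail.com"}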


Optimise Performance
This option will give you a small performance boost in the actual logging calculation. Part of this process requests information from elemental_log about what needs logging. For a single record this is an inexpensive request (5-10 milliseconds), however across a batch process like replacing thousands of records, the time can be measured in seconds.

By turning this option on, we cache the JSON data from elemental_log that tells us what to log. This JSON is called the log definition, and is stored in a global variable for referencing in future logging operations.

NOTE: A word of caution: because this caches the log definition, any changes a developer makes to the log configuration will not take effect for users until they restart the solution.  Use this option if your logging is set up and unlikely to change often, or if you are performing batch updates or replaces to data.


The Log Definition

The log definition is a block of JSON that is built for each file from the settings and fields specified in the configure tool.  You can view this JSON via the configure tool from the link provided.

Below is an example of a log definition:

{
  "meta" :
  {
    "cache_definition" : 1,
    "file_enabled" : 1,
    "file_hidden" : 0,
    "name_field_log" : "_elemental_log",
    "name_field_log_flag" : "_elemental_log_flag",
    "name_field_primary_key" : "PrimaryKey",
    "name_script" : "#elemental_log",
    "remove_formatting" : 0
  },
  "table_list" :
  {
    "Contacts" :
    {
      "field_list" :
      {
        "Company" : "1",
        "First Name" : "1",
        "Job Title" : "1",
        "Last Name" : "1",
        "Photo" : "1",
        "Title" : "1",
        "Website" : "1"
      },
      "primary_key" : "PrimaryKey"
    }
  }
}

It contains the settings and default field/script names in a meta object. Within the table_list object you'll find a key for every table where logging is taking place, and within that you'll find a list of the fields being logged, along with the name of the primary key field.

It is this JSON that each file will use to determine its logging requirements.

NOTE: We have allowed developers to directly modify the JSON if required (e.g. if table/field names change, or you wish to quickly modify anything - we do however recommend using the configure tool).




The Prepare Tool

Preparation means readying the JSON change-log field (by default named _elemental_log) for logging.  This is done by setting the log field to an initial state, which contains information such as the initial values of all the fields being tracked.

The main reason why we do this is so that when the very first change is made to a tracked field, we can ascertain what its initial value was prior to the change. This helps us determine whether a change has actually taken place, and log accordingly.

Batch operations such as the Replace Field Contents or Import Records script steps don't actually tell us which fields were changed, and so to determine this we must compare logged fields' current values to their previously known values to ascertain whether a change has taken place - this is where keeping initial values comes in handy.


Do you need to prepare?

Well, that depends. The main reasons to prepare logs are those mentioned above, and they only actually matter for the very first set of changes made to a given record. After that, the log self-prepares. Further to this, the populate process also resets the log after its contents have been sent to elemental_log, so preparation really is a first-time thing.

If you aren’t concerned with knowing what the previous value was for the very first changes made after you integrate logging, then you don’t need to bother with preparation (although we think it is a good idea).

Preparing your logs also helps a little with performance. There is a small overhead involved in the preparation process compared to normal logging operations, and so if you do this up-front rather than on the very first change logged, you will save some time in the logging process (note that preparation does occur automatically on the first change made to a logged field, but by that time it’s too late to know the initial value for any changed field).


Layouts required!

Because preparation involves changing the contents of the log fields, at least one layout is required in your file that is based on the base table that you are attempting to prepare.  This can be any layout - elemental_log will pick the first one it comes across and use that - so long as you have one.  If you don’t have a layout for a table, then it won’t be able to prepare.


Accessing the Prepare Tool

Access to the tool is found within the home screen of elemental_log in the list of tools on the right hand side:


Alternatively you can launch the prepare tool from your own files.


To launch the prepare tool directly
You can open the prepare tool user interface from your own files. Simply call the script PREPARE: Open Prepare.  Note that when opening from your own file, you will see all files set up in elemental_log, not just the file you are opening it from.
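
Following the same pattern as the other tool scripts, the call looks like:

Perform Script [ Specified: From list ; "PREPARE: Open Prepare" from file: "elemental_log" ; Parameter:  ]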


To run directly from your own files
To run prepare from your own files, call the script PREPARE: Run Prepare as shown:

Perform Script [ Specified: From list ; "PREPARE: Run Prepare" from file: "elemental_log" ; Parameter: $parameter ]

We have set the parameter to a variable $parameter purely to show that there are multiple possible parameter formats to do different things. Below are the various options for the parameter:


To prepare all files
If you are logging multiple files, you can prepare every file by specifying only a preparation type, either using JSON, or by just using the string “prepare” as the parameter:

JSONSetElement (
  "" ;
  [ "prepare_type" ; "prepare" ; JSONString ]
)


To prepare a single file only
This option is used to prepare every log field in an entire file:

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ] ;
  [ "prepare_type" ; "prepare" ; JSONString ]
)


To prepare a specific table in a file
This allows you to prepare only those records needing preparation in a single table:

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ] ;
  [ "table_name" ; "<<your base table name here>>" ; JSONString ] ;
  [ "prepare_type" ; "prepare" ; JSONString ]
)

This parameter requires you to specify the base table name (not the table occurrence name) of the table you want to prepare.  NOTE: You must also specify the file name, otherwise elemental_log won't know which file the table belongs to.


To specify a specific set of records in a given table
This allows you to nominate a specific set of records (by primary key values) to prepare in a given table.

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ] ;
  [ "table_name" ; "<<your base table name here>>" ; JSONString ] ;
  [ "record_ids" ; "<<return delimited list of ids>>" ; JSONString ] ;
  [ "prepare_type" ; "prepare" ; JSONString ]
)


Preparation versus Reset

There are two modes you can run the preparation process in - prepare and reset - defined by the prepare_type key you specify in the JSON parameter, or via the Prepare tool interface.

Prepare will only reset log entries that are completely empty (so have never been set before). Use this when first adding elemental_log to your solution, or if you have imported records with auto-enter options disabled.

Reset will reset log entries regardless of their contents. Use this if you want a clean slate of logs. NOTE:  This will overwrite any existing log contents. If you wish to keep the log contents, make sure you run Populate first to transfer log data over to elemental_log.


Preparation via Server Script Schedule

There is an additional option, which is to use a script schedule to run your preparation or reset. This would be useful if you wish to run this process overnight, outside of work hours, or if you have a large number of tables and records needing preparation.

To do this, you can use the same script as shown above, but to avoid the hassle of using JSON, you can pass just the single text string prepare or reset as the script parameter.  Note that currently this method of operation is only supported for preparing or resetting every file you have configured to log.
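
For example, an equivalent call using the plain text parameter (whether in a schedule or your own script) would look like:

Perform Script [ Specified: From list ; "PREPARE: Run Prepare" from file: "elemental_log" ; Parameter: "prepare" ]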


Using the Prepare Tool

The prepare tool interface is really simple to use - simply select the file you wish to prepare or reset. When selected, you can choose to either prepare or reset the entire file (all logged tables), or just a specific table within your file.  Tables that have no fields specified for logging will be greyed out.

If your file is hosted, then this operation will first attempt to run server-side via a Perform Script on Server step.  If this step is not available (e.g. running locally) then it will be performed client-side.

Note that this operation can take some time depending on the number of records being prepared or reset, which is why we try to do it on server if we can.



The Populate Tool

After a period of successful logging activity on your records, you may find that the size of your JSON change-log field begins to grow. Each time a change to a tracked field is made, an entry is recorded in an array within the JSON. The more time that passes, the larger this becomes. 

While this may not be an issue for a lot of people, there are two important things to consider:

  • A larger log means a larger record. A larger record takes longer for users to download when viewed, and increases the size of your solution.
  • A larger log can result in slower processing times for the logging function once there are hundreds or thousands of entries (depending on the size of the field being logged).

For this reason, we have included a Populate tool. The idea behind this tool is to pull all changes out of your files' JSON change-logs, and turn them into records within a table in elemental_log.

By having these changes stored in records, we can easily search and view them - something much more difficult in JSON.  elemental_log has two viewing tools built in to allow you to search and view.



Layouts required!

Because population involves obtaining the contents of the log fields via search followed by resetting them, at least one layout is required in your file that is based on the base table that you are attempting to populate from.  This can be any layout - elemental_log will pick the first one it comes across and use that - so long as you have one.  If you don’t have a layout for a table, then it won’t be able to populate.

Accessing the Populate Tool

Access to the tool is found within the home screen of elemental_log in the list of tools on the right hand side:


Alternatively you can launch the populate tool from your own files.


To launch the populate tool directly
You can open the populate tool user interface from your own files. Simply call the script POPULATE: Open Populate.  Note that when opening from your own file, you will see all files set up in elemental_log, not just the file you are opening it from.
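
Following the same pattern as the other tool scripts, the call looks like:

Perform Script [ Specified: From list ; "POPULATE: Open Populate" from file: "elemental_log" ; Parameter:  ]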


To run directly from your own files
To run populate from your own files, call the script POPULATE: Run Populate as shown:

Perform Script [ Specified: From list ; "POPULATE: Run Populate" from file: "elemental_log" ; Parameter: $parameter ]

We have set the parameter to a variable $parameter purely to show that there are multiple possible parameter formats to do different things. Below are the various options for the parameter:


To populate from all files
If you are logging multiple files, you can populate the audit log from every file by specifying no parameter.
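
In that case the call is simply:

Perform Script [ Specified: From list ; "POPULATE: Run Populate" from file: "elemental_log" ; Parameter:  ]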


To populate from a single file only
This option is used to populate from log fields in a specified file.

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ]
)


To populate from a specific table in a file
This allows you to only populate from records in a specific table in a specified file.

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ] ;
  [ "table_name" ; "<<your base table name here>>" ; JSONString ]
)

This parameter requires you to specify the base table name (not the table occurrence name) of the table you want to populate from.  NOTE: You must also specify the file name, otherwise elemental_log won't know which file the table belongs to.


To populate from a specific set of records in a given table
This allows you to nominate a specific set of records (by primary key values) to populate from in a given table.

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ] ;
  [ "table_name" ; "<<your base table name here>>" ; JSONString ] ;
  [ "record_ids" ; "<<return delimited list of ids>>" ; JSONString ]
)


To populate from a set of records prior to deletion
This is a special case where you may wish to delete one or more records in your database. In this situation, when records are physically deleted, their JSON change-log contents are also deleted. Before this happens you might wish to transfer the log contents over to elemental_log via a populate action. Once the log data is safely transferred you can delete your records.

To do this, you use the same parameters you would use when populating a specified set of records, but with an additional parameter to tell the process that you are deleting records:

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ] ;
  [ "table_name" ; "<<your base table name here>>" ; JSONString ] ;
  [ "record_ids" ; "<<return delimited list of ids>>" ; JSONString ] ;
  [ "delete" ; 1 ; JSONNumber ]
)

By specifying a delete action, the populate process will still attempt to run on server, but it will wait until completion before continuing. This is so that you can be sure log contents are safely transferred before you do your delete. If the file is local, then it will run locally.
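
As a rough sketch of the overall workflow - the Contacts table, the PrimaryKey field, and the way $ids is gathered here are assumptions, so adapt them to your own schema:

# Gather the primary keys of the records about to be deleted
Set Variable [ $ids ; Value: "" ]
Go to Record/Request/Page [ First ]
Loop
  Set Variable [ $ids ; Value: List ( $ids ; Contacts::PrimaryKey ) ]
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
# Transfer the log contents for those records before deleting them
Perform Script [ Specified: From list ; "POPULATE: Run Populate" from file: "elemental_log" ; Parameter: JSONSetElement ( "" ; [ "file_name" ; Get ( FileName ) ; JSONString ] ; [ "table_name" ; "Contacts" ; JSONString ] ; [ "record_ids" ; $ids ; JSONString ] ; [ "delete" ; 1 ; JSONNumber ] ) ]
# The logs are now safely in elemental_log, so the found set can be deleted
Delete All Records [ With dialog: Off ]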


Using the Populate Tool

The populate tool interface works exactly the same as the prepare tool. You just choose the file you wish to work with to view options.  You can either populate an entire file, or just a specified table. Tables that have no fields specified for logging will be greyed out.

If your file is hosted, then this operation will first attempt to run server-side via a Perform Script on Server step.  If this step is not available (e.g. running locally) then it will be performed client-side.

Note that this operation can take some time depending on the number of records being populated from, which is why we try to do it on server if we can.




The Full View Tool

The full view tool allows you to search and view all of the log records that have been created from JSON change-log data. This tool is built with FileMaker in mind and is intended to be used with find mode. We use a master-detail view to present the information; selecting a change shows its full details, along with a history of changes made to the chosen field for a given record.

NOTE: This tool is really intended to be a developer-only tool as it exposes change data for every file, table and record you are logging. You should consider using the contextual view tool if you want to restrict what you see, or building your own display of the data from your own files.



Accessing the Full View Tool

Access to the tool is found within the home screen of elemental_log in the list of tools on the right hand side:



Alternatively, if you do wish to open the full viewer from your own files, you can do so by running the VIEW_F: Open Full View script.  You must pass the current file name as a parameter, using JSON as shown below:

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ]
)

This is so the viewing tool can open in the correct position.


Viewing details about a change

The list view will show you some basic information about each change, but we capture more information than is displayed. To view all captured information, just highlight a row.


When a row is selected, a panel will appear on the right showing you all information about the change, in order from the top:

  • Field name
  • File name
  • Table name
  • Primary key value
  • Old (previous) value
  • New (current) value
  • Date and time changed
  • User account that made the change
  • Session ID (more on this here)
  • Batch number
  • Layout Name
  • Script Name
  • A history of all recorded changes to this field/record combination.

Some of these may need a little more explanation:

Session ID
A session ID is a piece of information that identifies a given user's session in a solution. How this is tracked depends entirely upon the solution it is integrated into, and it is up to the developer to add this ID into the tracking within the @ELEMENTAL_LOG custom function. This is useful because if your solution does track sessions, you can identify all changes a user made while logged in for any given session.
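
As a hedged illustration only - the $$SESSION_ID variable name is an assumption, not something elemental_log provides - a solution that tracks sessions might establish an ID in its startup script:

# Hypothetical startup-script step: establish an ID for this user's session
Set Variable [ $$SESSION_ID ; Value: Get ( UUID ) ]

The developer would then reference $$SESSION_ID from within the @ELEMENTAL_LOG custom function where the id_session value is captured.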

Layout Name
This is actually not that complicated. We store the name of the layout that a user was on at the time the field was changed.

Script Name
If a field is changed by way of a script, such as using a Set Field, or Replace Field Contents script step, then we record what script was responsible for the change. This is useful if a field value has changed but you are unsure how that change was made.

Batch Number
The batch number is a UUID that is assigned to all changes made within a single record-commit action in a given table.  This is useful if you need to determine all the other fields that were changed at the time a given field was changed in a table.

NOTE:  If related records are modified during the commit, they are assigned their own batch UUID. If you need to identify related fields changed, you could instead search on changes made with the same timestamp value. We also store the UTC time in milliseconds; while we do not display this in browse mode, it is a field available in find mode.


Searching

Searching is simple - just use the Find button provided, or go into find mode using the menus or shortcut keys.

We have a separately designed view for when you are in find mode, and expose all tracked fields for searching.


A note about previous value for a change

We endeavour to display the correct previous value for any given change, so that you know what the value changed from and to.  We achieve this in one of two ways.

Firstly, if a given entry is the first entry in the log for a given file/table/field/record combination, then we store the initial (old) value on the record, along with the current (new) value.  In fact, we go one step further and do this for the first log entry added after any reset of the JSON change-log.  Resetting occurs after the JSON change-log's contents are sent to elemental_log to be turned into records and the log is cleared.

Capturing the old value in this way is done because we cannot guarantee we will know what the previous value is by using the second method below.

The second method involves simply looking at the previous log record for the given file/table/field/record combination - the change directly preceding the one being viewed.

This method is used in most situations for performance and size considerations. While we could capture in the JSON change-log every previous and new value on every single change entry, this would greatly increase the size of the JSON log over time, and reduce performance as the log grows.  For this reason we only store the new (current) value on entries in the JSON change-log.  The JSON change-log does however contain initial values for all fields logged, and we use these for the first logged entry only. For example, in the JSON log shown earlier, the old value for the first email change comes from initial_values (amaclead@outlook.com), while its new value comes from the entry itself (amaclead@hotmail.com).

For more information on what the JSON change-log looks like, check here.




The Contextual View Tool

This is a special tool designed to show you just the changes made to the records you can see in your own file, on the layout you are looking at.  While still intended to be a developer-only tool, you can, if you wish, allow users access to it as a means to see changes (but this may not be desired, for reasons explained below).



Requirements and considerations for using the contextual view tool

The contextual view tool is solely intended to be accessed from your own solution, and is able to display the full listing of related table occurrences that are tied to your current layout context.

It does this using the #elemental_log script, and so this script is required in any file that is used to access the viewer. Even if your file does not do logging, you must still have this script in order to use the viewer.

NOTE: Because every related table occurrence is displayed, this is not an ideal tool to use in conjunction with spider-based graphs, or groups with many related table occurrences. While it can be, you may find usability difficult.  This tool works best for solutions designed using an anchor-buoy graph structure.


Accessing the contextual view tool

Access to the tool is done by calling the script VIEW_C: Open Contextual View from your own files. You are required to pass your file name as a parameter, in JSON format, as shown below:

JSONSetElement (
  "" ;
  [ "file_name" ; Get ( FileName ) ; JSONString ]
)

This is so the view tool can communicate with your own file to obtain a table occurrence group listing for your current context.

NOTE: This tool is unavailable from the home screen in elemental_log.


Viewing details about a change

When you open the contextual view tool, you will see a list of all related table occurrences on the left hand side, with your current context's table occurrence selected. By default we show all changes related to the current record of your current layout context.

Some basic information is shown about the changes in the list view.  To view the full set of change information, simply select a row.

You can also select any of the other table occurrences to view their changes. When viewing related table occurrences, changes from all related records are displayed. For example, if you have a relationship to the Addresses table occurrence, and 10 records are found through that relationship, then all changes for those 10 records are displayed.  It is important to note that this works based on the relationship criteria and not any layout-based filtering, such as on portals.  If you use portal-level filtering to hide records from your users, and you give them access to this tool, they would be able to see changes to records they are not permitted to view in the portal (another good reason why this is intended as a developer tool).


It is also worth noting that find mode is not supported in this dialog, as it is configured to show you changes to your current record. If you wish to change what it displays then you should change the record you are viewing on your layout.

If you wish, you can also leave this dialog open, and as you navigate to different layouts in your solution, the list of table occurrences will change depending on your context.

NOTE: If you change the record you are viewing, you will need to manually refresh the list of changes by re-selecting the table occurrence (this is a performance consideration for now).



Things to watch out for

Here we list a few things that you might need to watch out for when using elemental_log. While we take as much care as possible to ensure smooth sailing, there are certain actions that you may need to make manual adjustments for, or at least understand the implications of.

Renaming a table

The base table name is an important piece of information which is used both in the log definition JSON (kept in elemental_log), and in your own files to determine which fields in a given table require logging.

While table names should not be changed often, if you do need to change a table name that is logged, you will need to make a manual adjustment (for now) to the JSON log definition.

This can be done via the Configure tool, selecting the View JSON Definition link.


Identify the old table name, and adjust accordingly.

Back in the configure user interface, you will need to use the refresh tables button to update the listing to reflect the new table name.


Renaming a field

When a logged field is renamed, we are unable at this stage to determine what the old name was. (Technically it is possible with field ID storage in the log definition, but there are some performance considerations with this implementation - maybe in v2…)

If you rename a field, the old field name will still be retained in the JSON log definition.  If however you use the configure tool and select the table, we automatically run a process that clears the log definition of missing fields when the field listing is displayed.  This action will remove your renamed field from being logged.  

You will be required to re-add the field for logging.  If you aren't using the configure tool UI, you can go into the JSON definition and edit the field name manually.

This can be done via the Configure tool, selecting the View JSON Definition link.


Identify the old field name, and adjust accordingly.


Renaming a table occurrence

The only table occurrence you need to worry about renaming is the LogFields one that is added from elemental_log (see here). The name of this table occurrence is hard-coded into the @ELEMENTAL_LOG custom function. If you simply must change it, then you will need to change it in the custom function too.

You do however need to pay attention if you delete the table occurrence that the two logging fields' (_elemental_log and _elemental_log_flag) auto-enter calculations are based on.  When you first paste these fields into your table, the very first table occurrence created for the base table is used as the context of these calculations. Usually this is a very heavily used table occurrence in your file, and so the risk of deleting it is low.  However if you do, then the auto-enter calculations will break.

These calculations explicitly rely on the context upon which they are run, because this is used to determine the name of the underlying base table, which is in turn used to determine which fields to log (as found in the log definition).

You can use any table occurrence you wish for these calculations, but make sure you don’t break them by removing the TO they’re based on.


Incorrect file names

When adding a new file into the Configure tool, make sure you specify the file name exactly. Failure to do so will mean elemental_log will be unable to open the file.

Currently we only support logging of files that reside in the same folder as elemental_log, or files hosted on the same server.  In future versions we hope to open this up to logging any file on any server via a single copy of elemental_log (though performance must be considered with this implementation, which is why we have not done so yet).

If you set up a file to be logged, and then rename that file later, you must rename the file in elemental_log also.

Renaming of a file is not supported via the Configure interface, and must be done in the LogFiles table directly. You can access this layout via the home screen.

Once renamed, you should restart elemental_log for changes to take effect.


Multi-file setup configurations

elemental_log supports logging of multiple files.  Because those files are dynamically set up as records, elemental_log communicates with them via a script placed in each file.

This is achieved using Perform Script by Name and specifying a dynamic external data source through use of global variables for those sources.

When a file is added, it is given a specific number based on its creation order. This number determines the external data source reference that it will use for future communication with elemental_log.
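
As a purely hypothetical sketch of this mechanism - the actual variable and data source names used inside elemental_log will differ:

# The external data source's path list contains the global variable
# $$SOURCE_PATH_1, so setting the variable re-points the reference
Set Variable [ $$SOURCE_PATH_1 ; Value: "file:Contacts_Demo" ]
# A by-name call through that data source now reaches Contacts_Demo
Perform Script [ Specified: By name ; "#elemental_log" from file: "source_1" ; Parameter: $parameter ]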

If you are adding lots of files, and happen to delete a file, then the numbers assigned to other files will become offset by 1.  This can lead to the incorrect file being opened by elemental_log, as it will be opening the wrong reference.

While not common, it is worth noting. If you do add and delete multiple files frequently - however unlikely - just make sure you close and reopen elemental_log for the data sources to refresh and correct themselves.



Further Support

If you cannot find the answer you are looking for here, or you have found a bug, or just have feedback in general, then please do not hesitate to get in touch.

You can email us at elemental@teamdf.com