

GemStone/S 64 Bit™ 3.7.5 is a significant new release of the GemStone/S 64 Bit object server, including a number of new features and enhancements, and fixes for several critical bugs. This release also adds support for Debian and for installation using APT.
These Release Notes include changes between the previous version of GemStone/S 64 Bit, v3.7.4.3, and 3.7.5. If you are upgrading from a version prior to 3.7.4.3, review the release notes for each intermediate release to see the full set of changes.
For details about installing GemStone/S 64 Bit 3.7.5 or upgrading from earlier versions of GemStone/S 64 Bit, see the GemStone/S 64 Bit Installation Guide for v3.7.5 for your platform.


GemStone/S 64 Bit version 3.7.5 is supported on the following platforms:
For more information and detailed requirements for each supported platform, please refer to the GemStone/S 64 Bit v3.7.5 Installation Guide for that platform.

The following versions of GBS are supported with GemStone/S 64 Bit version 3.7.5:

VSD 5.6.5 is included with the GemStone distribution, and can also be downloaded as a separate product from https://gemtalksystems.com/vsd.

You must use GemBuilder for Java (GBJ) v3.2.1 with GemStone/S 64 Bit v3.7.5. GBJ v3.2.1 adds support for Windows clients, but has no other functional changes from v3.2.
There were a number of significant infrastructure changes between GBJ v3.1.3 and v3.2; see the Release Notes for GemBuilder for Java v3.2 for more information on upgrading.

Rowan is an open-source source code management system for GemStone, with underlying git repositories managing tonel-format code source files. GemStone/S 64 Bit v3.7.5 includes Rowan v2.10 for legacy users and v3.5 for use with Jadeite for Pharo.
Jadeite is an open-source GUI application for developing and debugging Smalltalk code in a GemStone repository; currently alpha-level and suitable for testing and evaluation. Jadeite supports code management operations on rowanized source code in Rowan projects and packages. Jadeite for Pharo clients are installed into Pharo 13.0 images and can be used on Linux, Windows, and Mac. Issues with the Jadeite alpha should be reported as tickets on the Jadeite project, github.com/GemTalk/JadeiteForPharo/issues
For information and help on Jadeite and getting started, see the online help: docs.gemtalksystems.com/current/JadeiteHelp.
To find out more about the current status of these open-source projects, inquire on one of GemTalk’s email forums: the customer forum at gemstone-smalltalk@lists.gemtalksystems.com, or the GLASS forum at glass@lists.gemtalksystems.com.

Jadeite for Pharo clients can connect to GemStone servers running Rowan 3, in a repository based on $GEMSTONE/bin/extent0.rowan3.dbf. Currently, Rowan 3 cannot be installed in existing extent0.dbf-based repositories. Jadeite for Pharo provides support for Rowan code management, as well as code development and debugging support.
For more information and setup instructions, see: github.com/GemTalk/JadeiteForPharo/blob/main375/SetupForWithRowan.md

GemStone/S 64 Bit v3.7.5 includes a Rowan stub, which allows Jadeite to interact with an extent0.dbf-based repository, without a full Rowan installation. This allows Jadeite for Pharo to be used with an existing GemStone repository that has been upgraded to v3.7.5.
As in base GemStone, there is no code management support when running with the Rowan stub. Source code can be managed using filein/fileout to disk files. Monticello packages will be supported, but this support is not yet available.
The 3.7.5 distribution includes the RowanStubForJadeite installation under $GEMSTONE/examples/jadeite.
For more information and setup instructions, see: github.com/GemTalk/JadeiteForPharo/blob/main375/SetupForWithoutRowan.md


The version of zlib has been updated from 1.3 to 1.3.1.
The version of OpenSSL has been updated from 3.0.16 to 3.0.19.

GemStone is now supported on Debian 12 and 13.

GemStone distributions now include a .deb package format, which can be installed using APT (apt or dpkg). This is supported on Debian-compatible Linux distributions, including Ubuntu. The installation process and restrictions are similar to those for the RPM on Red Hat.
An additional Installation Guide is provided, GS64-InstallGuide-LinuxAPT-3.7.5.pdf.

GemTalk now provides a PPA (Personal Package Archive), which allows you to install using apt directly, rather than using the downloaded .deb file. Instructions are at ppa.gemtalksystems.com.

In v3.7.4, support was added for system log file rotation, including Gem and server process logs, using the SIGHUP signal. The general rotation process is to first move (or delete) the existing disk log file, then send SIGHUP to the process, which starts writing to a new log file with the previous name and mode.
This process can now be applied to files opened by the application using GsFile or GsLog (for FileSystem files).


The following method has been added:
GsFile >> reopen
If the underlying disk file is no longer present, GsFile creates a new disk file with the same name and path, and returns true. The reopen is done using freopen (see man 3 fopen), with the existing path and mode. If the disk file still exists, nothing is done and false is returned. If an error occurs, this method returns nil. The receiver must be a server file whose mode contains 'a' or 'w', and cannot be stdin, stdout, or stderr.

GsLog is a new class in v3.7.5, which provides a mechanism to do SIGHUP-based file rotation with a FileSystem-based log file. GsLog implements #reopen, used as with GsFile.
An instance of GsLog encapsulates a FileReference and a ZnCharacterWriteStream, allowing the underlying file to be reopened by a LogRotateNotification handler. GsLog is thread-safe: writes to the log file are protected by a mutex. Reopen tries to lock the mutex, and signals a Warning if it cannot acquire the lock. If the file exists during the initial open or during a reopen, the existing file is appended to. If the specified file does not exist, a new file is created.
Note that there is a risk of deadlock when file writes can occur during Ephemeron mourning. See the class comments for GsLog for more information.

The class LogRotateNotification has been added. An instance of this class is signalled asynchronously when the Gem process or topaz -l receives a SIGHUP. By default, this is not enabled. Once enabled, it does not need to be reenabled after a signal is received. When enabled, a handler for this notification should be installed that calls #reopen on each GsFile or GsLog that is configured for log rotation.

The application sets up a handler for the LogRotateNotification signal (using addDefaultHandler:). This handler sends reopen to the file being rotated. If more than one file is rotated, even if the rotation is on different schedules, the handler should reopen all of these files. A reopen is a no-op if the file was not moved or deleted.
At the OS level, log rotation of application GsFiles should move the disk file to an archive location, and send SIGHUP to the Gem.
Note that using mv within the local file system ensures no writes are lost. If the archive process instead copies the log and deletes the original (which includes using mv to a destination on a non-local file system), a write that occurs in the small gap between log archiving and sending SIGHUP may be lost.

In this example, myGsFile is a GsFile and myGsLog is a GsLog, both of which are open for write or append. The following handler reopens both.
LogRotateNotification enableSignalling.
LogRotateNotification addDefaultHandler: [:ex |
	myGsFile reopen ifNil: [
		self error: 'reopen failed ', myGsFile pathName].
	myGsLog reopen.
	ex resume.
].


The Repository >> restoreFromBackup* methods shut down the Reclaim Gem and, when the restore completes, restart it with an increased number of threads, to ensure reclaim does not hold up tranlog restore. Previously, the system computed the number of threads, which could be considerably larger than needed and did not respect STN_MAX_GC_RECLAIM_SESSIONS. Now, the maximum number of threads is the configured STN_MAX_GC_RECLAIM_SESSIONS or 4, whichever is larger.
You should review your configuration of STN_MAX_GC_RECLAIM_SESSIONS, to ensure the setting is appropriate for your application. This is most important for repositories performing a large volume of tranlog restore.

In previous releases, when the session that initiated restore from tranlogs was terminated, the restore continued until all specified tranlogs were restored. Now, terminating the session will stop tranlog restore at that point. Restore can be continued by logging in and re-executing the tranlog restore operation.

After a programmatic restoreFromBackup* (of a repository in full tranlogging mode), you must log in again to execute commitRestore. In cases where you do not need to restore tranlogs, the restore process can be simplified with the added method disableRestoreFromLogs. After executing this method (in both encrypted and unencrypted extent systems), restoreFromBackup skips restoring tranlogs and automatically performs a commitRestore.
Note that any commits that are not in the programmatic backup are lost, since transaction logs are not replayed, and transaction logs cannot be replayed after commitRestore.
This method should be executed after login and before the restore operation. The status is set in SessionTemps, and only applies for that restore operation (since restore terminates the session).
Repository >> disableRestoreFromLogs
Causes the next restoreFromBackup operation (normal or secure) by the session to immediately commit the restore and return the repository to normal mode. Executing the method #commitRestore is unnecessary.
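The following sketch shows how this might be used in a programmatic restore; the backup file path is hypothetical.
```smalltalk
"Restore from backup without replaying tranlogs; the path is hypothetical."
SystemRepository disableRestoreFromLogs.
SystemRepository restoreFromBackup: '/backups/nightly.dat'.
"The restore performs commitRestore automatically; as with any
 restoreFromBackup, the session is terminated when the restore completes."
```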

Note:
ParallelDo is a preview feature in v3.7.5. Additional large-scale testing is needed, and the API is subject to change.
The ParallelDo framework allows you to concurrently process a large number of objects across multiple Gems running on multiple hosts. This is designed for operations such as migrate, which may require modifying many objects. The objects are provided in a collection to a manager Gem; the manager divides the objects into chunks and distributes them over the worker Gems, which do the actual processing.
The worker Gems run asynchronously; they automatically commit to avoid out-of-memory conditions, automatically catch and retry failed commits, and capture other errors for later examination.



To start with, create an instance of ParallelDoManager, and provide it with the collection of objects to be processed and the processing code in a block.
All objects to be processed must be provided in a single collection, typically an Array or IdentitySet. The manager divides this collection into subcollections (called chunks). The maximum size of each chunk is specified by the user and may be no larger than 2034 elements, to ensure all chunks remain small objects.
The processing block must be a one-argument block. For example:
[:obj | obj migrate ]
Use makeCurrent so this ParallelDoManager instance is saved in this session’s UserGlobals, and can be accessed using ParallelDoManager current.
(ParallelDoManager
	newForCollection: myColl
	workerBlock: [:obj | obj migrate ]) makeCurrent.

Each host must have the host name and the number of sessions specified. There are methods to automatically set the host name for workers running on the Stone’s host.
To use a host with a default configuration, use methods such as:
ParallelDoManager >> addLocalHostWithNumSessions: numSessions
ParallelDoManager >> addHostWithName: hostname withNumSessions: numSessions
parallelDoMgr
	addLocalHostWithNumSessions: 2;
	addHostWithName: 'lark' withNumSessions: 4.
Hosts (instances of ParallelDoHost) can be configured, with both Gem configuration options and remote cache options. To do this, create an instance of ParallelDoHost and configure it before adding it to the ParallelDoManager.
You may specify the following Gem and cache configuration options:
printCommitConflictDetails:
gemStatmonitorArgs:
tempObjCacheSizeMb:
tempObjCacheSizeGb:
sharedCacheSizeGb:
sharedCacheSizeMb:
sharedCacheLargeMemoryPagePolicy:
cacheWarmerArgs:
sharedCacheLargeMemoryPageSizeMb:
The following example configures a remote host with gem and cache configuration options:
aHost := ParallelDoHost
	newForHostname: 'lark' withNumberOfSessions: 4.
aHost
	sharedCacheSizeGb: 4 ;
	tempObjCacheSizeMb: 200 ;
	sharedCacheLargeMemoryPagePolicy: 2 ;
	sharedCacheLargeMemoryPageSizeMb: 1024 ;
	cacheWarmerArgs: '-d -n 2 -L /tmp' ;
	printCommitConflictDetails: 2;
	gemStatmonitorArgs: '-i1 -u 3 -z -f /tmp/statmon1s_%%S_%%P_%d-%m-%y-%H:%M:%S'.
parallelDoMgr addHost: aHost.

Before performing the processing, the manager must create chunks from the collection, create the workers and log them in.
Each worker searches for a chunk to process which is not locked and has not been processed by another worker. The worker does this by requesting a write lock on a candidate chunk. If the lock is granted and the last element in the chunk is not nil, then the worker processes the chunk while holding the write lock on it. The write lock guarantees a given chunk will be processed by no more than one worker.
As the worker enumerates a locked chunk, objects which have been processed are removed from it by storing nil. When a chunk is completely processed, all elements will be nil. The write lock on the chunk will be released by the next successful commit.
The next chunk available for processing is stored in a global cache statistic. This serves as a hint to workers, indicating where in the list of chunks to start searching for the next chunk to process.
Workers are automatically logged out when they have completed their assigned elements, or on fatal error.
Gem logs for all workers are automatically retained and are not subject to deletion upon clean logout.

Collect errors and the worker report, using collectErrors and createWorkerReport.
createWorkerReport provides the results, including the number of errors that occurred. If there are errors, examine the objects in ParallelDoManager current allErrors, which include the call stacks for the errors as well as the objects on which the errors occurred.

The following example uses two workers on localhost and two workers on a remote host, including configuration of the remote cache, to migrate objects in the collection theColl.
| parallelDoMgr aHost |
parallelDoMgr := ParallelDoManager
	newForCollection: theColl
	workerBlock: [:obj | obj migrate].
parallelDoMgr makeCurrent.
parallelDoMgr addLocalHostWithNumSessions: 2.
aHost := ParallelDoHost
	newForHostname: 'lark' withNumberOfSessions: 2.
aHost
	sharedCacheSizeGb: 4;
	tempObjCacheSizeMb: 200.
parallelDoMgr addHost: aHost.
ParallelDoManager current
	createChunks;
	prepareToRun;
	run;
	pollUntilCompleteAtInterval: 500;
	collectErrors;
	createWorkerReport.
The final call to createWorkerReport prints a report formatted in columns:


When code executed by a GsTsExternalSession returns a kind of string, the external session automatically converts the result into a string in the calling session.
Previously, only remote objects of type String or Unicode7 were correctly converted, since these hold Characters requiring no more than 8 bits. The behavior for these objects is unchanged; the local object created for the remote object is of the same class as the remote object.
Remote objects of class DoubleByteString, QuadByteString, Unicode16, or Unicode32 were previously returned as ByteArrays. Now, these values are automatically encoded into instances of Utf8 prior to being returned. This is the same process as the recommended workaround in previous releases.
Instances of Utf8 were, and are, automatically converted to a local object of the appropriate class:
Note that when the remote execution’s return values are traditional or unicode strings that do not match the default classes for the local repository’s Unicode Comparison Mode (most commonly if the remote repository has a different Unicode Comparison Mode than the local repository), the class of the result in the local repository may vary if some require encoding and others do not.
If you were not previously using the workaround, but instead performing the multi-step process required to decode the ByteArray into a kind of string, this will no longer work; you will need to remove the manual decode.
Note that legacy GsExternalSession has no change in behavior.


When the input to a GsTsExternalSession execution was a Utf8, the compiler may have attempted to parse beyond the end of the decoded input, resulting in a compiler error. (#51543)

When a compile error occurred in the external session code, resolveResult: could encounter a further error while processing that error. (#51563)

Objects returned from the remote session have their OOPs added to the remote session’s export set. Non-string type objects must be manually removed from the remote export set. String type objects are removed periodically. To force removal, the following method has been added.
GsTsExternalSession >> flushReleaseOops

Detached execution refers to RPC Gems that are started using the GsTsExternalSession forkAndDetach* methods, or the newly added GsHostProcess >> forkAndDetach.

Previously, when the code provided to a forkAndDetach* method completed, control returned to the instance of GsTsExternalSession and further execution was possible in that session. This was not intended behavior, and risked the session holding the commit record. Now, the remote detached execution Gem logs out when it completes execution of the argument code.

When a Gem was running with detached execution, and topaz DEBUGGEM was used to attach to that Gem, the resume command could not be used to resume; the detached execution Gem had to be terminated. (#51457)

In normal (non-detached) external session code execution, errors in the remote code are reported in the calling session, but this is not possible in detached execution. In detached execution, the stacks are printed in the Gem log for the detached execution Gem. Previously, these stacks did not include enough information for debugging; now, stacks with arguments are printed. (#51541)

When a Gem runs with detached execution, unhandled errors cause stacks to be written to the log of the detached execution Gem. Previously, CompileErrors were an exception: they were logged even when a handler was present. Now, a CompileError that is caught and handled will not have a stack printed in the Gem log. (#51512)

The relationship between DateAndTime and its superclass DateAndTimeANSI, and the organization of behavior between them, have been modified and cleaned up in this release. Behavior has been moved from one class to the other. See also DateTime removed/deprecated methods.

DateAndTimeANSI was conceptually intended to be a dialect-neutral superclass containing the ANSI-specified DateAndTime behavior, while the DateAndTime implementation itself contained GemStone-specific extensions and implementation details. However, both were previously concrete, and instances of DateAndTimeANSI could be created. With the introduction of SmallDateAndTime, this caused issues on some code paths. DateAndTimeANSI is now abstract.

The following instance creation method has been added:
DateAndTime class >> posixSeconds: secondsSince1970 offsetSeconds: offsetSeconds
Returns a SmallDateAndTime if the arguments are in range. secondsSince1970 is UTC time in seconds since the beginning of 1970; offsetSeconds is the offset from UTC in seconds.
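For illustration, a sketch of constructing a DateAndTime from a POSIX timestamp; the specific values are arbitrary.
```smalltalk
"1700000000 seconds since 1970 is a moment in November 2023;
 -28800 is a UTC offset of -8 hours."
DateAndTime posixSeconds: 1700000000 offsetSeconds: -28800.
```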

The following methods have been added, allowing conversion between the ANSI DateAndTime class and the legacy DateTime class.
DateTime instances know their specific TimeZone, but DateAndTime instances only know their offset, which could correspond to multiple different TimeZones. Two conversion methods are provided so you can either use the current TimeZone or provide a specific TimeZone as an argument for the converted DateTime instance.
DateAndTime >> asDateTime
Answer a DateTime representing the same moment in time as the receiver, in the current TimeZone. The offset of the receiver is ignored. DateTimes know their TimeZone, and the TimeZone knows about any daylight saving time transitions, but DateAndTimes only know their offset. Most offsets can occur in several different TimeZones, so we cannot infer the TimeZone from the offset.
DateAndTime >> asDateTimeInTimeZone: aTimeZone
Answer a DateTime representing the same moment in time as the receiver, in the given TimeZone. The offset of the receiver is ignored.
DateTime >> asDateAndTime
Answer a DateAndTime representing the same moment in time as the receiver, and with the offset appropriate to the receiver's TimeZone as of that moment.
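A sketch of round-trip conversion between the two classes; TimeZone current is assumed here to answer the session's current TimeZone, and whether that is the appropriate zone depends on the application.
```smalltalk
| aDateAndTime aDateTime |
aDateAndTime := DateAndTime now.
aDateTime := aDateAndTime asDateTime.	"converted in the current TimeZone"
aDateTime := aDateAndTime asDateTimeInTimeZone: TimeZone current.
aDateAndTime := aDateTime asDateAndTime.	"offset from aDateTime's TimeZone"
```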

The method DateAndTime >> asDate has been added, to extract the Date from a DateAndTime without needing the expensive conversion to a DateTime.

When daylight saving time causes the clock to be set forward or back, there is a period of time in which there is either no local time, or two ambiguous local times. This was previously handled similarly to the legacy non-ANSI DateTime; this has been changed so that DateAndTime conforms to the ANSI specification.
When the time is set back at the end of daylight saving time, there are two local times corresponding to a single UTC time. ANSI specifies that an ambiguous local time should resolve to the earlier of the two local times. Previously, ambiguous DateAndTimes were resolved to the later of the two local times; now, the earlier one is created, to conform to ANSI.
When creating a DateAndTime from a local time that does not exist, previously this was allowed and created DateAndTime one hour later; for example, creating a DateAndTime for the non-existent time of 2:30 while daylight saving time is starting, created the actual local time of 3:30. The ANSI standard specifies that this should error, and attempting to create this nonexistent local DateAndTime now signals an error.
Note that the older, non-ANSI DateTime class resolves ambiguous times to the earlier of the two, and handles nonexistent times by creating the equivalent later time; DateTime is unchanged, for historical reasons. If your application requires rigorous handling of date-time values during DST transitions, it is recommended that you use UTC times, which are never ambiguous.

When the second or hour argument to DateAndTimeANSI class>>year:day:hour:minute:second:offset: was out of range, the error message provided an incorrect upper limit (off by one) (#51523)

DateTimes in TimeZones with a partial negative offset (that is, in a TimeZone east of UTC and west of the International Date Line, with an offset that is not a whole-hour increment) were produced with an offset that was incorrect by one hour. (#51657)

The following methods could produce incorrect results, and the functionality has been removed; they now signal an error. The methods remain in the image to indicate the replacement method, and send deprecated: for deprecation tracking, although they are not technically deprecated.
DateAndTimeANSI class >> secondsLocal:offset:
Use secondsUTC:offset: with the local time converted to UTC.
TimeZoneInfo >> offsetAtLocal:
Use offsetAtUTC: with the local time converted to UTC.
TimeZoneInfo >> transitionAtLocal:
Use transitionAtUTC: with the local time converted to UTC.
TimeZoneTransition >> transitionTimeLocal
Convert the result of transitionTimeUTC to local time.

TimeZoneTransition >> asDateAndTime
Answer the transition moment expressed as a DateAndTime with the offset being transitioned to.
TimeZoneInfo >> transitionsAtOffsetPosixSeconds: seconds
Given a local 'wall clock' time, encoded as seconds relative to Jan 1, 1970 in the time zone defined by the receiver, answer a collection of transitions which would result in that time. Normally the result is of size 1, but it may be 0 or 2 as a result of DST.


Scans to find a reference path to an object, using GsSingleRefPathFinder or the recently added methods findReferencePath and findReferencePathString, can now be used to find paths to instances of Class or Metaclass3.
The GsSingleRefPathFinder printToLog instance variable is now a SmallInteger rather than a Boolean, with the following values:

A new method has been added to find all objects in the repository that have only a single unique reference.
Repository >> listObjectsWithOneReferenceWithMaxThreads: maxThreads waitForLock: lockWaitTime percentCpuActiveLimit: percentCpu
Scans the entire repository for objects referenced once and only once by any other object, and returns these as a GsBitmap.
Multiple references from a parent object to a child object are considered a single reference, and the child object will be included in the result. Objects which reference themselves are included in the result if that reference is the only one.
This method begins a transaction and runs in this transaction for its duration. It should therefore not be used in production systems due to commit record backlogs which may cause excessive repository growth.
Uncommitted objects and dead not reclaimed objects are excluded from the result. Note that objects in the result set may be disconnected from the repository (unreachable) and therefore could disappear after the next garbage collection cycle.
Raises an error if the session has modified persistent objects. Raises an error if a garbage collection operation is in progress, or if the repository vote state is not zero (see System class >> voteState). The lockWaitTime argument specifies how many seconds the method should wait while attempting to acquire the gcLock. No other garbage collection operations may be started or in progress while this method is running. There must also be no outstanding possible dead objects in the system for the GC lock to be granted.
Starts maxThreads on the host system and allows the host to run up to percentCpu percent CPU usage. A page buffer of 16 pages (256 KB) is allocated per thread.

The following method has been added:
GsBitmap >> allCommittedInstancesOf: aClass
Searches the receiver for all committed objects which are instances of aClass, and returns a new GsBitmap containing those instances. Uncommitted objects present in the receiver are ignored, and not included in the result.
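The two additions in this section can be combined; the following sketch finds singly-referenced objects and narrows the result to committed instances of one class (Association here, purely for illustration; the thread, lock-wait, and CPU values are also illustrative).
```smalltalk
| singles assocs |
singles := SystemRepository
	listObjectsWithOneReferenceWithMaxThreads: 2
	waitForLock: 60
	percentCpuActiveLimit: 90.
assocs := singles allCommittedInstancesOf: Association.
```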

Starting in v3.7.2, the AdminGem only runs when needed: after markForCollection to handle voting, and if epoch GC is enabled.
The configuration method System class >> setAdminConfig:toValue:, which transiently sets values in the currently running AdminGem, usually had no effect when Epoch GC was not enabled. Now, this method signals an error if invoked when the AdminGem is not running.
Use the method System class >> setPersistentAdminConfig:toValue: to persistently define the AdminGem configuration; this will be used the next time the AdminGem starts up.
Method comments relating to the AdminGem have been improved; and some minor issues in GcGem session management have been fixed.
The following methods have been added:
System class >> getPersistentAdminConfig: configSymbol
Return the AdminGem persistent configuration settings, which are values in GcUser's UserGlobals. May differ from current runtime settings. See getAdminConfig: for relevant symbols.
System class >> getPersistentReclaimConfig: configSymbol
Return the ReclaimGem's persistent configuration settings, which are values in GcUser's UserGlobals. May differ from current runtime settings. See getReclaimConfig: for relevant symbols.
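A sketch of setting and reading back a persistent AdminGem configuration value. #epochGcTimeLimit is an assumed example symbol; consult getAdminConfig: for the symbols actually accepted.
```smalltalk
"Persistently configure the AdminGem for its next startup, then read the
 value back. #epochGcTimeLimit is an assumed symbol."
System setPersistentAdminConfig: #epochGcTimeLimit toValue: 3600.
System getPersistentAdminConfig: #epochGcTimeLimit.
```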


The following methods have been added:
System class >> sessionLocks: aSessionId
Returns an Array describing locks held by the specified session, including
1. an Array of read-locked objects
2. an Array of write-locked objects
3. an Array of objects with deferred unlocks
Deferred unlocks are objects for which the unlock request was received by the Stone while another session was holding the commit token; they will be unlocked as soon as the commit token is released.
System class >> sessionLocksReport: aSessionId
Returns a String describing all of the objects locked by the specified session.
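A sketch of inspecting lock state for the current session; System session is assumed here to answer the current session's id.
```smalltalk
| mySession locks |
mySession := System session.	"assumed to answer the current session id"
locks := System sessionLocks: mySession.
(locks at: 2) size.	"number of write-locked objects"
System sessionLocksReport: mySession.	"human-readable summary"
```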

The report generated by System class >> currentSessionNames now includes the transaction level and the voting status. For example:
2 GcUser reclaimgcgem 1642585 on localhost
3 SymbolUser symbolgem 1642586 on localhost
4 DataCurator topaz -l 1705674 on localhost transactionLevel 1, this session
5 SystemUser gem 1693847 on localhost transactionLevel 1 not voted
6 GcUser admingcgem 1705766 on localhost

The following methods have been added:
System class >> cachesList
Return an Array describing all shared caches that the Stone process is managing, including the cache on the Stone's machine. The result contains one element per cache. Each element of the result is an Array containing:
- hostName, a String
- a Boolean - true if cache was created as a mid-level cache
- a SmallInteger - total number of sessions connected to the cache
- a SmallInteger - max number of sessions for the cache
- a SmallInteger - size of cache in KB
- a SmallInteger - number of sessions using cache as a mid-level cache
- a SmallInteger - zero or sessionId of hostagent on stone host servicing the cache
- a String - ipAddress of the host of the cache
- a SmallInteger - zero or sessionId hostagent running on a mid-level cache host.
System class >> cachesReport
Return the result of cachesList formatted as a string, one line per cache.
System class >> reportForSession: sessionId
Returns a report containing internal information about the specified session. For example:
userId DataCurator, transactionLevel 1, viewTime 28 seconds ago, onOldestCr true, commitsSinceView 0,
gemProcessId 3814914, gemHost localhost, gciPeer 127.0.0.1, login 28 seconds ago, tempOops 1999

Methods have been added to provide information about class categories. These methods accept and return kinds of String, since they are designed to support GUI tools. They treat hyphen-delimited class category strings as representing a hierarchical structure, and are provided to query both for specific full category strings and for association with logically-inherited supercategories.
ClassOrganizer >> classCategoryNames
ClassOrganizer >> allClassCategoryNames
ClassOrganizer >> classCategoryNamesInDictionaryName: dictionaryName
ClassOrganizer >> allClassCategoryNamesInDictionaryName: dictionaryName
ClassOrganizer >> classNamesInClassCategoryNamed: categoryString dictionaryName: dictionaryName
ClassOrganizer >> classNamesUnderClassCategoryNamed: categoryString dictionaryName: dictionaryName
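A sketch of how these queries might be used; the category and dictionary names below are purely illustrative.
```smalltalk
| org |
org := ClassOrganizer new.
org classCategoryNames.
"Classes in the exact category 'Collections-Sequenceable', then all classes
 under the 'Collections' supercategory; names here are illustrative."
org classNamesInClassCategoryNamed: 'Collections-Sequenceable'
	dictionaryName: 'Globals'.
org classNamesUnderClassCategoryNamed: 'Collections'
	dictionaryName: 'Globals'.
```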


Similar to other class creation methods, but allows omission of the classInstVarNames: and poolDictionary: keywords.

The method GsFile class >> contentsOfServerDirectory: has been added. This is a convenience method invoking contentsOfDirectory:onClient:.

This method forks a child process, similar to the existing fork method. The child process will not be killed when this session's gem or topaz -l process exits, nor will it be killed when the receiver is finalized by in-memory GC of temporary objects.

Sets the receiver, an instance of Locale, to be the instance returned by Locale current.

The following methods have been added:
Repository >> fullBackupGzCompressedTo: fileNames MBytes: mByteLimit threads: numThreads
Repository >> fullBackupLz4CompressedTo: fileNames MBytes: mByteLimit threads: numThreads
These are variants of the existing fullBackup methods that allow specifying both the maximum file sizes and the number of threads.
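A sketch of a multi-file, multi-threaded compressed backup; the paths and size limit below are illustrative, and fileNames is assumed to accept an Array of file name Strings.
```smalltalk
"Write lz4-compressed backup files of up to 100 GB each, using 4 threads."
SystemRepository
	fullBackupLz4CompressedTo: { '/backups/b1.lz4' . '/backups/b2.lz4' }
	MBytes: 102400
	threads: 4.
```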

Return the value of the cache statistic with the given offset, which must be >= 1 and <= the size of the Array returned by the cacheStatisticsDescription* method applicable to the Stone process. Returns nil if the offset is out of range. This method may be used on hosts remote from the Stone process.


The method System class >> myUserGlobals has been un-deprecated. UserGlobals has been a required SymbolDictionary for several major releases. It is used by the new ParallelDo feature.


DateAndTime >> currentTimeZone
DateAndTime class >> secondsSince2001
DateAndTimeANSI >> initializeAsNow

These methods were unreliable, and the functionality has been removed. The methods remain in the image and send deprecated:, to ease tracking and provide additional information.
DateAndTimeANSI class >> secondsLocal:offset:
TimeZoneInfo >> offsetAtLocal:
TimeZoneInfo >> transitionAtLocal:
TimeZoneTransition >> transitionTimeLocal

The following private or implementation methods have been removed as part of the cleanup/reorganization of DateAndTime and DateAndTimeANSI.
DateAndTime class >> _scaledDecimal6mantissa:
DateAndTime class >> _smallFromMicrosecs:offset:
DateAndTimeANSI >> asFloatParts
DateAndTimeANSI >> partsFrom:
DateAndTimeANSI >> printString (now inherited)
DateAndTimeANSI >> _posixSeconds:offset:
DateAndTimeANSI >> _seconds:
DateAndTimeANSI >> _secondsLocal:offset:
DateAndTimeANSI class >> _zoneOffsetAtLocal:
DateAndTimeANSI class >> _zoneOffsetAtUTC:

The following private methods have been removed from DateAndTimeANSI; equivalent methods have been added to DateAndTime, or were already present there.
DateAndTimeANSI >> asDays
DateAndTimeANSI >> asPosixSeconds
DateAndTimeANSI >> asString
DateAndTimeANSI >> beRounded (now shouldNotImplement)
DateAndTimeANSI >> currentTimeZone
DateAndTimeANSI >> posixSeconds:
DateAndTimeANSI >> printJsonOn:
DateAndTimeANSI >> printRoundedOn:
DateAndTimeANSI >> printStringWithRoundedSeconds
DateAndTimeANSI >> rounded
DateAndTimeANSI >> _secondsUTC:offsetSeconds:
DateAndTimeANSI class >> fromString:
DateAndTimeANSI class >> migrateNew
DateAndTimeANSI class >> posixSeconds:offset:XX
DateAndTimeANSI class >> secondsUTC:offset:
DateAndTimeANSI class >> _zoneOffsetAtPosix:

The following methods have been removed:
GsSingleRefPathFinder >> printTimestampToLog
ProcessorScheduler >> dbgfatallog:
Repository >> _getShrinkRepository
Repository >> _primRestoreSecureBackups:scavPercentFree:bufSize:privateDecryptionKey:passphrase:numThreads:shrinkRepos:newSystemUserPassword:
Repository >> _restoreBackups:scavPercentFree:bufSize:numThreads:shrinkRepos:newSystemUserPassword:


When startlogsender is invoked to write split tranlogs (using the -F and -W flags), the split tranlogs are created using the default umask & 0666, and each one is made read-only when closed. Previously, the mask applied on close was 0400, removing group and world read; now, the previous file mode & 0444 is applied, allowing group or world read depending on the default umask.

Previously, the only way to detect errors in configuration files was to attempt to start the Stone, or login, using that configuration file.
Now, you can validate a configuration file for use by the Stone, Gem, or X509-secured Netldi. Note that this checks for syntax errors and required settings, but it is not designed to determine whether a particular set of extents can start up using that configuration file; for example, it does not check for missing extents or insufficient RAM.

The utility script validate_config has been added. This script takes two arguments: the path to the configuration file, and the type, an integer indicating Stone (1), Gem (2), or X509-secured Netldi (3).
os$ validate_config -t 1 -p $GEMSTONE/data/system.conf
validate_config accepts posix "long" form argument syntax:
os$ validate_config --type=1 --path=$GEMSTONE/data/system.conf
The script logs in a superDoit solo session to execute the validation. If errors are found, the details are printed along with a message that validation failed, and the script returns 1. If validation succeeds, it prints 'Validation succeeded' and returns 0.

A method has been added to support configuration file validation. It can only be run in a solo session, and is intended for use by OS-level utility scripts, but may be invoked directly.
System class >> parseConfigFile: fileNameString processType: typeInt
fileNameString must be a kind of String containing the path to a configuration file, which may be absolute or relative. typeInt must be the SmallInteger 1, 2, or 3, specifying how the file should be parsed.
1 - parse as a Stone config file
2 - parse as a Gem config file
3 - parse as an X509-secured Netldi config file
If the file fails to parse, this method prints an error with the parse details and signals an Error with number 2710. If the file parses correctly, it returns self. Note that for a type 2 (Gem) configuration file, the settings in fileNameString are applied to the solo session executing this method.
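A sketch of invoking the validation directly from a solo session; the path is illustrative:

```smalltalk
"Parse a Stone (type 1) configuration file.
 Signals an Error with number 2710 if parsing fails."
System parseConfigFile: '/opt/gemstone/data/system.conf' processType: 1
```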

The largememorypages calculation relies on an accurate value for SHR_PAGE_CACHE_NUM_PROCS for the machine that will be configured with Linux Huge Pages.
SHR_PAGE_CACHE_NUM_PROCS is normally left at a default value in configuration files. When this is at the default, GemStone calculates a value based on a number of individual configuration parameters, including STN_MAX_SESSIONS, but also including, for example, the number of reclaim threads and the number of cachewarmer threads. These parameters also support defaults that allow GemStone to calculate a value based on the environment, including the number of cores.
If SHR_PAGE_CACHE_NUM_PROCS is not explicitly set, and the -P argument is not used, the results are particularly inaccurate for remote cache calculation using largememorypages -r.
For calculations for the Stone’s host:

The statprom utility provides an interface to use Prometheus to monitor GemStone by recording statistics from the GemStone shared page cache. In this version, statprom code is packaged in a new shared library, libstatprom-3.7.5-64.so. This simplifies installation in some configurations.


Topaz now supports the -P script argument. This is similar to -S; but while -S reads the script file argument and exits if no error occurs, -P reads the script file argument and does not exit, unless there is an explicit exit in the argument script.
When using -P, quit and exit are considered errors if they occur in a nested input file, other than the -P argument script itself.
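For example, a sketch (the script name is illustrative):

```
os$ topaz -l -P initialSetup.tpz
```

Topaz reads initialSetup.tpz and then remains at the interactive command prompt, unless the script itself contains an explicit exit.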

The -I option to Topaz allows input of an initialization script. With this option, echo is suppressed unless an error occurs; when an error occurs, the entire output is printed, but passwords are redacted.

An instance of ExitClientError can be signalled when the application wants topaz or other GCI client to exit with a specific error status.
Now, if ExitClientError is signalled to the GCI of an interactive linked topaz process, and the linked topaz was configured with GEM_LISTEN_FOR_DEBUG=true in a configuration file or on the command line, or if System listenForDebugConnection was executed, then topaz will stop at the command line, to allow the ExitClientError to be debugged, and not exit.

If the environment variable GS_TOPAZ_AUTO_RESULTCHECK is defined in topaz's environment, then result checking is automatically enabled, equivalent to DISPLAY RESULTCHECK. See the help text for DISPLAY RESULTCHECK for more details.

Previously, superDoit script variants were designed to execute Smalltalk code. A new variant, topaz, has been added to allow executing topaz commands as well as embedded Smalltalk code.
The following examples have been added in $GEMSTONE/examples/superDoit:
error.topaz
simple.topaz
template.topaz
These scripts accept and use topaz command line arguments, and the contents can include any topaz commands. You may use a default or explicit .topazini, or embed credentials in the script itself.
With topaz scripts, to avoid echoing the entire output, it is recommended to use the -q command line argument and to use GsFile gciLogServer: for any output intended for the user.

Log file names for all GemStone processes can now be composed using patterns. Previously, NRS #log: directives provided some patterns for the Gem log. Now, patterns can be applied to the Stone, Netldi, and other system log files, and to X509-secured GemStone gem log files; and additional pattern elements have been added. The new patterns are compatible with, and usable in, NRS.
If log file patterns are not used, log file names are handled as in previous releases.
The Stone’s child processes and threads that produce logs, including shrpcmon, symbol gem, reclaim gem and admin gem, cache warmer, openId, and login log, inherit their pattern from the Stone’s log pattern provided by the -l argument or the $GEMSTONE_LOG environment variable.
This can be overridden by the new environment variable GEMSTONE_CHILD_LOG_PATTERN. This controls the log name for the Stone’s child process/thread logs, but does not affect the Stone’s log file name.
%P and either %N or %T are required in GEMSTONE_CHILD_LOG_PATTERN, to avoid multiple processes writing to the same log file or producing unrecognizable log file names. If either is missing, the log pattern is ignored and the default log file name is used; a message is written to the log file in this case.
If %Q (listening port) is used in the Netldi log file name, the value of the -P command line parameter will be used, or a port specified in $GEMSTONE_NRS_ALL. If these are not available, a port assigned in the services database will be used. If the Netldi uses a random listening port assigned by the OS, it is not possible to include the port number in the log file name; it will be omitted.
A new man page, logfilenames(5), has been added, as well as a short summary of formats in the -h output from processes that accept the -l argument.


startstone -l %S-%T-%%Y-%%m-%%d-%%H:%%M:%%S.log
Produces system log files with names such as the following. Note that the log names, other than the Stone's log, are the defaults, since the pattern does not include %P.
gs64stone-stone-2026-01-07-14:44.log
gs64stone_390702reclaimgcgem.log
gs64stone_390703symbolgem.log
gs64stone_390671pcmon.log
gs64stone_login_2026-01-07-14:44:08.974.log

export GEMSTONE_CHILD_LOG_PATTERN=GS_%%Y-%%m-%%d-%%H:%%M:%%S_%T_%P.log
startstone -l GS_%S_%%Y-%%m-%%d.log
Produces log files with names such as the following. The child logs use the specified format.
GS_gs64stone_2026-01-07.log
GS_2026-01-07-14:25:14_pcmon_389149.log
GS_2026-01-07-14:25:14_reclaimgcgem_389180.log
GS_2026-01-07-14:25:14_symbolgem_389181.log
GS_2026-01-07-14:25:14_login_389147.log

startnetldi -g -l $GEMSTONE/logs/%T%Q.log -D $GEMSTONE/logs/ 34567
and in topaz RPC, logging in with:
set gemnetid !#netldi:34567#log:%N_%%Y%%m%%d-%%H:%%M:%%S.%%q_%P.log!gemnetobject
Writes the Netldi and Gem logs to the directory $GEMSTONE/logs/ (which must exist, or startup and login will fail), with the following log file names.
$GEMSTONE/logs/netldi34567.log
$GEMSTONE/logs/gemnetobject_20260107-13:46:39.238_385695.log


This release includes improved handling of remote caches over unreliable networks. (#51593). Note that 3.7.5 also includes the fix for bug #51839 which was present in the 3.6.9 LD release.
A new configuration parameter has been added to automatically attempt a reconnect after a remote cache connection timeout, rather than terminating the remote cache and the sessions on that cache.

STN_REMOTE_CACHE_RECONNECT_TIMEOUT
Maximum time in seconds that the page manager thread in the Stone will wait for a disconnected remote cache to reconnect. This setting does not apply to remote X509 caches, which are always terminated when STN_REMOTE_CACHE_PGSVR_TIMEOUT expires.
Note that the Stone makes automatic adjustments to synchronize parameters:
A value of -1 disables reconnect behavior; with this setting, the expiration of STN_REMOTE_CACHE_PGSVR_TIMEOUT causes immediate shutdown of that cache and sessions on that cache.
The default is 2.5 * the STN_REMOTE_CACHE_PGSVR_TIMEOUT, which by default is 15 seconds. With both settings at default, the reconnect timeout is 37 seconds.
Runtime equivalent: #StnRemoteCacheReconnectTimeout (requires SystemControl privilege)
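An illustrative Stone configuration fragment; the 60-second value is an example, not a recommendation:

```
STN_REMOTE_CACHE_PGSVR_TIMEOUT = 15;
STN_REMOTE_CACHE_RECONNECT_TIMEOUT = 60;
```

With these settings, the page manager waits up to 60 seconds for a timed-out remote cache to reconnect before terminating that cache and its sessions.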

The following statistics have been added:
RemoteCacheReconnectCount (Stone)
Number of successful reconnects by remote caches after a network timeout.
RemoteCacheTimeoutCount (Stone)
Number of network timeouts seen by the Stone page manager thread when communicating with remote cache pgsvrs.

When a linked login on a remote node triggered remote cache monitoring via the configuration parameter GEM_STATMONITOR_ARGS, and the -f argument was a relative file path, the statmonitor data file was put into the home directory of the Unix user of the linked process.
Now, the data file is written to the working directory of the linked process that triggered the remote cache creation.


New arguments have been added to startnetldi to enable reporting of the number and duration of calls to PAM for authentication, and to trigger more detailed reporting when any call takes longer than a configured duration threshold. Reports are not printed if there has been no activity during the preceding interval.
This feature is only valid when the NetLDI is authenticating the user via PAM, that is, not in guest mode (-g); and not for the X509-secured Netldi (-S).
The added startnetldi arguments are:
-y printIntervalSecs
Interval in seconds at which to print statistics on calls to PAM to the log file. Not compatible with -g or -S. When the -d flag (to enable debugging) is used and -y is not specified, a default of -y300 is inferred.
-z printThresholdMs
Print detailed statistics on calls to PAM when any PAM call takes longer than printThresholdMs milliseconds. Not compatible with -g or -S.

Example of an interval report:
--- 10/06/25 12:10:13 PDT
PAM Performance: Calls total: 55, ok: 52, fail: 3, Time avg: 8 ms, high: 103 ms, low: 0 ms, Threads: 1
Example of a threshold report:
--- 10/06/25 12:18:27 PDT PAM call exceeded threshold of 5 ms
Duration: 17 ms, success: 1, user: 'gsadmin'
Function          Duration  calls  ok  fail  retry  err
=======================================================
getpwnam                 0      1   1     0      0    0
pam_start                1      1   1     0      0    0
pam_authenticate        14      1   1     0      0    0
pam_end                  0      1   1     0      0    0
=======================================================
Example of a report when an error occurred:
--- 10/06/25 12:29:25 PDT PAM call exceeded threshold of 5 ms
Duration: 6 ms, success: 0, user: 'gsadmin'
err msg: 'Password validation failed for user gsadmin, pam_authenticate error:7, Authentication failure'
Function          Duration  calls  ok  fail  retry  err
=======================================================
getpwnam                 0      1   1     0      0    0
pam_start                2      1   1     0      0    0
pam_authenticate         4      1   0     1      0    7
pam_end                  0      0   0     0      0    0
=======================================================

New arguments have been added to startnetldi to allow it to record statistics to a vsd-compatible statistics data file. This process does not use statmonitor, and does not attach to the shared page cache; the statistics are written directly to disk by the Netldi.
The interface to manage writing statistics is basic; most of the many statmonitor features have not been implemented for the Netldi. In particular, note that statistics data will be written to a single file for the lifetime of the Netldi. To manage these statistics data files, you will need to regularly restart the Netldi, providing a new -f filename argument.
The Netldi-generated statistics data files include Netldi and OS system statistics only. The Netldi internal statistics (see Added Netldi-specific statistics) are not accessible using statmonitor or from Smalltalk methods.
-f statFileNameOrDir
A file name or directory for writing statistics data. Requires -j. Also enables writing a PAM summary (-y) on Netldi shutdown only.
If a file name is given, the file must not exist. Files with a .gz suffix will be gzip-compressed; files with a .lz4 suffix will be lz4-compressed; otherwise, the file is not compressed.
If a directory is given, it must exist; with a directory argument, a file with a default file name containing the Netldi name and a timestamp with a .gz suffix will be created.
-j intervalSecs
Sample interval in seconds to write statistics data to the statistics data file. Requires -f.
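A sketch of enabling statistics recording; the file path, interval, and Netldi name are illustrative:

```
os$ startnetldi -f /gshost/stats/netldi-stats.gz -j 5 gs64ldi
```

This records Netldi and OS statistics every 5 seconds to a gzip-compressed statistics data file that can be loaded into VSD.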

The following statistics are collected only when the startnetldi cache statistics feature is enabled; these are not taken from the shared cache and are not accessible using statmonitor or from Smalltalk methods.
In addition to these statistics, the standard OS statistics for the netldid process and Linux system statistics are also recorded by the Netldi.
ClientRequestAvgTime
Average time in microseconds that a client request takes to process.
ClientRequestCount
Total number of client requests processed by netldi.
ClientRequestFailCount
Number of failed client requests processed by netldi.
ClientRequestOkCount
Number of successful client requests processed by netldi.
ClientRequestTotalTime
Total amount of time in milliseconds netldi has spent processing client requests.
ClientThreadsActiveCount
Number of client threads active in the netldi.
ClientThreadsConfigCount
Maximum number of client threads the netldi is configured for.
ClientThreadsWaitAvgTime
Average time in microseconds spent waiting for a free client thread.
ClientThreadsWaitCount
Number of times a request waited for a free client thread.
ClientThreadsWaitTotalTime
Total amount of time in milliseconds spent waiting for a free client thread.
ForkAvgTime
Average amount of time in microseconds spent in a call to fork.
ForkCount
Total number of forks performed by netldi.
ForkFailedCount
Number of failed forks performed by netldi.
ForkOkCount
Number of successful forks performed by netldi.
ForkTotalTime
Total amount of time in milliseconds spent in calls to fork.
PamCallsAvgTime
Average duration in microseconds of all calls to PAM.
PamCallsCount
Number of calls to PAM.
PamCallsFailedCount
Number of failed calls to PAM.
PamCallsFastestTime
Duration in milliseconds of the fastest call to PAM.
PamCallsOkCount
Number of successful calls to PAM.
PamCallsSlowestTime
Duration in milliseconds of the slowest call to PAM.


The Netldi now supports collecting cache statistics, and a number of Netldi-specific statistics have been added. See Netldi added ability to record statistics for VSD.

Aliases allow meaningful names to be attached to SessionStat00-SessionStat48. The aliases had become incorrect for the GcAdmin gem, so the displayed names were incorrect. Note that this fix is in VSD v5.6.5, which is bundled with the GemStone distribution, not in the GemStone/S server itself.

GemMemoryFootPrintKb (Gem)
Approximate total memory footprint, in KB, of allocated temporary object memory.

Previously, Linux system statistics for HugePages did not differentiate between different memory page sizes. These old statistics have been replaced by new ones that include the page size in the name.
The following statistics have been removed:
The following statistics have been added. Note that the unit of these statistics is pages, not KB/MB.
HugePages2MbTotal
The total number of 2 Mb huge memory pages configured.
HugePages2MbFree
The number of free 2 Mb huge memory pages.
HugePages2MbRsvd
The number of reserved 2 Mb huge memory pages.
HugePages2MbSurp
The number of surplus 2 Mb huge memory pages.
HugePages1GbTotal
The total number of 1 Gb huge memory pages configured.
HugePages1GbFree
The number of free 1 Gb huge memory pages.
HugePages1GbRsvd
The number of reserved 1 Gb huge memory pages.
HugePages1GbSurp
The number of surplus 1 Gb huge memory pages.
In addition to 2 MB and 1 GB huge pages, Linux on ARM supports the 64 KB and 32 MB huge memory page sizes. The following are reported on Linux/ARM only:
HugePages64KbTotal
The total number of 64 Kb huge memory pages configured.
HugePages64KbFree
The number of free 64 Kb huge memory pages.
HugePages64KbRsvd
The number of reserved 64 Kb huge memory pages.
HugePages64KbSurp
The number of surplus 64 Kb huge memory pages.
HugePages32MbTotal
The total number of 32 Mb huge memory pages configured.
HugePages32MbFree
The number of free 32 Mb huge memory pages.
HugePages32MbRsvd
The number of reserved 32 Mb huge memory pages.
HugePages32MbSurp
The number of surplus 32 Mb huge memory pages.


The error corresponding to LogRotationNotification has been added:
RT_ERR_SIGHUP/6026
An error type has been added to allow better error reporting for loss of connection during a multithreaded operation.
FATAL_ERR_DURING_MT_OP 4011
The following errors, related to internal errors in X509-secured GemStone, have been added:
ERR_REBUILD_SCAVENGABLE/4022
ERR_DEPMAP_FAILURE/4023
ERR_OT_AUGMENT_FAILURE/4024
ERR_COMPOSE_CR_FAILURE/4025


Previously, the HostAgent log file was always written in the directory specified by the startnetldi -D argument, with the name hostagent-stoneName-remoteHost-PIDStoneHost.log.
The starthostagent script now accepts the -l argument, allowing the name and location of the log file to be specified. The patterns described in New utility script to validate configuration files can be used. The log file name is determined by the first of the following that is found:
1. The -l argument to starthostagent
2. GEMSTONE_NRS_ALL in the environment where starthostagent is executed.
3. GEMSTONE_NRS_ALL in the environment where startnetldi was executed, or the startnetldi -X argument NRS
4. The default, hostagent-stoneName-remoteHost-PIDStoneHost.log

On an X509-secured mid-level cache, if the HostAgent running on that mid-level cache goes down but the cache itself is still running, it is possible for the HostAgent to be restarted and reconnect.
To simplify this, the Netldi now automatically generates a reconnect script when a cache becomes a mid-level cache, with the name restartMidHostAgent_stoneName.sh. This is a no-argument script that must be executed manually.
This script is written to the directory specified by the startnetldi -D argument.

Now, if the NETLDI_PORT_RANGE and other required configuration parameters are not supplied, the mid-level cache in an X509 configuration will not start up. (#51552)

The following methods have been added:
GemStoneX509Parameters >> clearQuietLogin
GemStoneX509Parameters >> quietLoginFlag


In X509-secured GemStone, if the page lookup on a leaf host missed, the leaf host did not attempt to read from the mid cache. (#51534).

The pusher threads in the HostAgent that warm the mid-level cache did not scan the Stone’s entire cache, resulting in an incompletely warmed cache. (#51558)

If SHR_PAGE_CACHE_LARGE_MEMORY_PAGE_SIZE_MB is not set, and large pages are enabled using SHR_PAGE_CACHE_LARGE_MEMORY_PAGE_POLICY, the default large memory page size on the given host should be used. This was not being done in an x509-Secured GemStone system. (#51423)

It was possible for the HostAgent to create a commit record backlog when the NetLDI on a leaf host failed to respond. The underlying code was not respecting the timeout provided. (#51549)

There was a risk of a SEGV in the HostAgent when the HostAgent closed the SSL connection to the Gem during periods when the Gem could be receiving InterSessionSignals, or when the HostAgent was forwarding a sigAbort to an X509 Gem. (#51564)

When a thread in the HostAgent had a fatal error, it did not correctly handle the exit to ensure the Stone knew the session had disconnected and release the commit token and other resources. (#51566)

The mid-cache HostAgent did not correctly handle the case where a leaf cache connection was lost. (#51538)

When using -E, the netldi name generated could be malformed, resulting in starthostagent failing to connect to the remote netldi for a leaf host. (#51643)

Gems do not tolerate the loss of a connection to a mid-level cache; if the mid-level cache HostAgent dies, the Gem may error with "lost connection to pgsvr". (#51622)

In an X509-secured GemStone configuration, the Host Agent on a mid-level cache did not properly close its end of the socket to the HostAgent on the Stone’s node for that mid-level cache node. This resulted in running out of file descriptors on the mid-level cache node. (#51619)

When the mid-level cache HostAgent had no available slots and another Gem attempted to connect, it crashed. (#51644)
Now, the connection to the mid-level cache HostAgent will fail without affecting the HostAgent. The Gem will run without using the mid-level cache and may see performance issues.

When stophostagent was executed shortly after starthostagent, while an X509 remote cache was configured with cache warming and the cache warming was not complete, the warmer gem failed to exit when the remote cache was shut down. (#41829)