Create A New File Store
/management/weblogic/{version}/edit/fileStores
Add a new file store to this collection.
Request
- application/json
-
version(required): string
The version of the WebLogic REST interface.
-
X-Requested-By(required): string
The 'X-Requested-By' header is used to protect against Cross-Site Request Forgery (CSRF) attacks. The value is an arbitrary name such as 'MyClient'.
Must contain a populated file store model.
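Putting the endpoint and the required headers together, a create request might be assembled as follows. This is a sketch only: the admin host, store name, directory, and target identity are hypothetical placeholders, and the request is shown assembled but unsent.

```python
import json

# Hypothetical values -- substitute your own admin host, names, and paths.
host = "http://adminhost:7001"
version = "latest"  # the {version} segment of the REST path
url = f"{host}/management/weblogic/{version}/edit/fileStores"

# A minimal file store model for the POST body (illustrative fields only).
payload = {
    "name": "MyFileStore",
    "directory": "/u01/stores/MyFileStore",
    "targets": [{"identity": ["servers", "myserver"]}],
}

# X-Requested-By is required to pass the CSRF check; the value is arbitrary.
headers = {
    "Content-Type": "application/json",
    "X-Requested-By": "MyClient",
}

body = json.dumps(payload)
```

The body would then be POSTed to `url` with the shown headers, authenticating as a user in the Admin security role.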
object
-
blockSize:
integer(int32)
Minimum Value:
-1
Maximum Value:8192
Default Value:-1
The smallest addressable block, in bytes, of a file. When a native wlfileio driver is available and the block size has not been configured by the user, the store selects the minimum OS-specific value for unbuffered (direct) I/O, if it is within the range [512, 8192].
A file store's block size does not change once the file store creates its files. Changes to block size only take effect for new file stores or after the current files have been deleted. See "Tuning the Persistent Store" in Tuning Performance of Oracle WebLogic Server.
-
cacheDirectory:
string
Default Value:
null
The location of the cache directory for Direct-Write-With-Cache, ignored for other policies.
When Direct-Write-With-Cache is specified as the SynchronousWritePolicy, cache files are created in addition to primary files (see Directory for the location of primary files). If a cache directory location is specified, the cache file path is CacheDirectory/WLStoreCache/StoreNameFileNum.DAT.cache. When specified, Oracle recommends using absolute paths, but if the directory location is a relative path, then CacheDirectory is created relative to the WebLogic Server instance's home directory. If "" or Null is specified, the Cache Directory is located in the current operating system temp directory as determined by the java.io.tmpdir Java System property (JDK's default: /tmp on UNIX, %TEMP% on Windows) and is TempDirectory/WLStoreCache/DomainNameunique-idStoreNameFileNum.DAT.cache. The value of java.io.tmpdir varies between operating systems and configurations, and can be overridden by passing -Djava.io.tmpdir=My_path on the JVM command line.
Considerations:
Security: Some users may want to set specific directory permissions to limit access to the cache directory, especially if there are custom configured user access limitations on the primary directory. For a complete guide to WebLogic security, see "Securing a Production Environment for Oracle WebLogic Server."
Additional Disk Space Usage: Cache files consume the same amount of disk space as the primary store files that they mirror. See Directory for the location of primary store files.
Performance: For the best performance, a cache directory should be located in local storage instead of NAS/SAN (remote) storage, preferably in the operating system's temp directory. Relative paths should be avoided, as relative paths are located based on the domain installation, which is typically on remote storage. It is safe to delete a cache directory while the store is not running, but this may slow down the next store boot.
Preventing Corruption and File Locking: Two same-named stores must not be configured to share the same primary or cache directory. There are store file locking checks that are designed to detect such conflicts and prevent corruption by failing the store boot, but it is not recommended to depend on the file locking feature for correctness. See Enable File Locking.
Boot Recovery: Cache files are reused to speed up the file store boot and recovery process, but only if the store's host WebLogic Server instance has been shut down cleanly prior to the current boot. For example, cache files are not re-used and are instead fully recreated after a kill -9, after an OS or JVM crash, or after an off-line change to the primary files, such as a store admin compaction. When cache files are recreated, a Warning log message 280102 is generated.
Fail-Over/Migration Recovery: A file store safely recovers its data without its cache directory. Therefore, a cache directory does not need to be copied or otherwise made accessible after a fail-over or migration, and similarly does not need to be placed in NAS/SAN storage. The Warning log message 280102, which is generated to indicate the need to recreate the cache on the new host system, can be ignored.
Cache File Cleanup: To prevent unused cache files from consuming disk space, test and developer environments should periodically delete cache files.
Constraints
- legal null
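The cache file path construction described above can be sketched as a small helper. The directory, store name, and zero-padded file-number format are assumptions for illustration, not the literal naming scheme of any given release:

```python
def cache_file_path(cache_directory, store_name, file_num):
    """Sketch of the cache file naming described above:
    CacheDirectory/WLStoreCache/StoreNameFileNum.DAT.cache
    The zero-padded file-number format is an assumption for illustration."""
    return f"{cache_directory}/WLStoreCache/{store_name}{file_num:06d}.DAT.cache"

print(cache_file_path("/u01/cache", "MyStore", 0))
# -> /u01/cache/WLStoreCache/MyStore000000.DAT.cache
```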
-
deploymentOrder:
integer(int32)
Minimum Value:
0
Maximum Value:2147483647
Default Value:1000
A priority that the server uses to determine when it deploys an item. The priority is relative to other deployable items of the same type.
For example, the server prioritizes and deploys all EJBs before it prioritizes and deploys startup classes.
Items with the lowest Deployment Order value are deployed first. There is no guarantee on the order of deployments with equal Deployment Order values. There is no guarantee of ordering across clusters.
-
directory:
string
Default Value:
null
The path name to the file system directory where the file store maintains its data files.
When targeting a file store to a migratable target, the store directory must be accessible from all candidate server members in the migratable target.
For highest availability, use either a SAN (Storage Area Network) or other reliable shared storage.
Use of NFS mounts is discouraged, but supported. Most NFS mounts are not transactionally safe by default, and, to ensure transactional correctness, need to be configured using your NFS vendor documentation in order to honor synchronous write requests.
For a SynchronousWritePolicy of Direct-Write-With-Cache, see Cache Directory.
Additional O/S tuning may be required if the directory is hosted by Microsoft Windows; see Synchronous Write Policy for details.
Constraints
- legal null
-
distributionPolicy:
string
Default Value:
Distributed
Allowed Values:[ "Distributed", "Singleton" ]
Specifies how the instances of a configured JMS artifact are named and distributed when cluster-targeted. A JMS artifact is cluster-targeted when its target is directly set to a cluster, or when it is scoped to a resource group and the resource group is in turn targeted to a cluster. When this setting is configured on a store, it applies to all JMS artifacts that reference the store. Valid options:
Distributed: Creates an instance on each server JVM in a cluster. Required for all SAF agents and for cluster-targeted or resource-group-scoped JMS servers that host distributed destinations.
Singleton: Creates a single instance on a single server JVM within a cluster. Required for cluster-targeted or resource-group-scoped JMS servers that host standalone (non-distributed) destinations and for cluster-targeted or resource-group-scoped path services. The Migration Policy must be On-Failure or Always when using this option with a JMS server, On-Failure when using this option with a messaging bridge, and Always when using this option with a path service.
Instance Naming Note:
The DistributionPolicy determines the instance name suffix for cluster-targeted JMS artifacts. The suffix for a cluster-targeted Singleton is -01 and for a cluster-targeted Distributed is @ClusterMemberName.
Messaging Bridge Notes:
When an instance per server is desired for a cluster-targeted messaging bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Distributed/Off, respectively; these are the defaults. When a single instance per cluster is desired for a cluster-targeted bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Singleton/On-Failure, respectively. If you cannot cluster-target a bridge and still need singleton behavior in a configured cluster, you can target the bridge to a migratable target and configure the Migration Policy on the migratable target to Exactly-Once.
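The instance-naming rule above can be illustrated with a short sketch (the artifact and member names are hypothetical):

```python
def instance_name(artifact_name, distribution_policy, cluster_member=None):
    """Illustrates the instance-name suffix rule: a cluster-targeted
    Singleton gets the suffix -01, while a cluster-targeted Distributed
    gets @ClusterMemberName."""
    if distribution_policy == "Singleton":
        return artifact_name + "-01"
    if distribution_policy == "Distributed":
        return f"{artifact_name}@{cluster_member}"
    raise ValueError("distribution_policy must be 'Distributed' or 'Singleton'")

print(instance_name("MyStore", "Singleton"))               # MyStore-01
print(instance_name("MyStore", "Distributed", "server1"))  # MyStore@server1
```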
-
dynamicallyCreated:
boolean
Read Only:
true
Default Value:false
Return whether the MBean was created dynamically or is persisted to config.xml
-
failbackDelaySeconds:
integer(int64)
Default Value:
-1
Specifies the amount of time, in seconds, to delay before failing a cluster-targeted JMS artifact instance back to its preferred server after the preferred server failed and was restarted.
This delay allows time for the system to stabilize and dependent services to be restarted, preventing a system failure during a reboot.
A value > 0 specifies the time, in seconds, to delay before failing a JMS artifact back to its user-preferred server.
A value of 0 indicates that the instance would never fail back.
A value of -1 indicates that there is no delay and the instance fails back immediately.
Note: This setting only applies when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.
-
failOverLimit:
integer(int32)
Minimum Value:
-1
Default Value:-1
Specify a limit for the number of cluster-targeted JMS artifact instances that can fail over to a particular JVM.
This can be used to prevent too many instances from starting on a server, avoiding a system failure when starting too few servers of a formerly large cluster.
A typical limit value should allow all instances to run in the smallest desired cluster size, which means (smallest-cluster-size * (limit + 1)) should equal or exceed the total number of instances.
A value of -1 means there is no fail-over limit (unlimited).
A value of 0 prevents any fail-overs of cluster-targeted JMS artifact instances, so no more than one instance will run per server (an instance that has not failed over).
A value of 1 allows one fail-over instance on each server, so no more than two instances will run per server (one failed-over instance plus an instance that has not failed over).
Note: This setting only applies when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.
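The sizing guidance above reduces to a simple inequality, sketched here with hypothetical cluster numbers:

```python
def fail_over_limit_ok(total_instances, smallest_cluster_size, limit):
    """Check the sizing guidance quoted above:
    smallest_cluster_size * (limit + 1) should equal or exceed the total
    number of instances; a limit of -1 means unlimited fail-overs."""
    if limit == -1:
        return True
    return smallest_cluster_size * (limit + 1) >= total_instances

# 6 instances that must still all run when the cluster shrinks to 2 servers:
print(fail_over_limit_ok(6, 2, 2))  # True  (2 * 3 = 6 >= 6)
print(fail_over_limit_ok(6, 2, 1))  # False (2 * 2 = 4 < 6)
```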
-
fileLockingEnabled:
boolean
Default Value:
true
Determines whether OS file locking is used.
When file locking protection is enabled, a store boot fails if another store instance already has opened the store files. Do not disable this setting unless you have procedures in place to prevent multiple store instances from opening the same file. File locking is not required but helps prevent corruption in the event that two same-named file store instances attempt to operate in the same directories. This setting applies to both primary and cache files.
-
id:
integer(int64)
Read Only:
true
Return the unique id of this MBean instance
-
initialBootDelaySeconds:
integer(int64)
Default Value:
60
Specifies the amount of time, in seconds, to delay before starting a cluster-targeted JMS instance on a newly booted WebLogic Server instance. When this setting is configured on a store, it applies to all JMS artifacts that reference the store.
This allows time for the system to stabilize and dependent services to be restarted, preventing a system failure during a reboot.
A value > 0 is the time, in seconds, to delay before loading resources after a failure and restart.
A value of 0 specifies no delay.
Note: This setting only applies when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.
-
initialSize:
integer(int64)
Minimum Value:
0
Default Value:0
The initial file size, in bytes.
Set InitialSize to pre-allocate file space during a file store boot. If InitialSize exceeds MaxFileSize, a store creates multiple files (number of files = InitialSize/MaxFileSize rounded up).
A file store automatically reuses the space from deleted records and automatically expands a file if there is not enough space for a new write request.
Use InitialSize to limit or prevent file expansions during runtime, as file expansion introduces temporary latencies that may be noticeable under rare circumstances.
Changes to initial size only take effect for new file stores, or after any current files have been deleted prior to restart.
See Maximum File Size.
-
ioBufferSize:
integer(int32)
Minimum Value:
-1
Maximum Value:67108864
Default Value:-1
The I/O buffer size, in bytes, automatically rounded down to the nearest power of 2.
For the Direct-Write-With-Cache policy when a native wlfileio driver is available, IOBufferSize describes the maximum portion of a cache view that is passed to a system call. This portion does not consume off-heap (native) or Java heap memory.
For the Direct-Write and Cache-Flush policies, IOBufferSize is the size of a per-store buffer which consumes off-heap (native) memory, where one buffer is allocated during run-time, but multiple buffers may be temporarily created during boot recovery.
When a native wlfileio driver is not available, the setting applies to off-heap (native) memory for all policies (including Disabled).
For the best runtime performance, Oracle recommends setting IOBufferSize so that it is larger than the largest write (multiple concurrent store requests may be combined into a single write).
For the best boot recovery time performance of large stores, Oracle recommends setting IOBufferSize to at least 2 megabytes.
See AllocatedIOBufferBytes to find out the actual allocated off-heap (native) memory amount. It is a multiple of IOBufferSize for the Direct-Write and Cache-Flush policies, or zero.
-
logicalName:
string
Default Value:
null
The name used by subsystems to refer to different stores on different servers using the same name.
For example, an EJB that uses the timer service may refer to its store using the logical name, and this name may be valid on multiple servers in the same cluster, even if each server has a store with a different physical name.
Multiple stores in the same domain or the same cluster may share the same logical name. However, a given logical name may not be assigned to more than one store on the same server.
Constraints
- legal null
-
maxFileSize:
integer(int64)
Minimum Value:
1048576
Maximum Value:2139095040
Default Value:1342177280
The maximum file size, in bytes, of an individual data file.
The MaxFileSize value affects the number of files needed to accommodate a store of a particular size (number of files = store size/MaxFileSize rounded up).
A file store automatically reuses space freed by deleted records and automatically expands individual files up to MaxFileSize if there is not enough space for a new record. If there is no space left in existing files for a new record, a store creates an additional file.
A small number of larger files is normally preferred over a large number of smaller files, as each file allocates window buffer and file handles.
If MaxFileSize is larger than 2^24 * BlockSize, then MaxFileSize is ignored, and the value becomes 2^24 * BlockSize. The default BlockSize is 512, and 2^24 * 512 is 8 GB.
The minimum size for MaxFileSize is 10 MB when multiple data files are used by the store. If InitialSize is less than MaxFileSize, then a single file will be created of InitialSize bytes. If InitialSize is larger than MaxFileSize, then (InitialSize/MaxFileSize) files will be created of MaxFileSize bytes, plus an additional file if necessary to contain any remainder.
See Initial Size.
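The file-count and size-cap arithmetic described above can be sketched as follows (the example inputs are illustrative, not recommended settings):

```python
import math

def effective_max_file_size(max_file_size, block_size=512):
    """MaxFileSize is capped at 2**24 * BlockSize
    (8 GB at the default block size of 512 bytes)."""
    return min(max_file_size, 2 ** 24 * block_size)

def initial_file_count(initial_size, max_file_size, block_size=512):
    """Files pre-allocated at boot: InitialSize / MaxFileSize,
    rounded up, with at least one file."""
    cap = effective_max_file_size(max_file_size, block_size)
    return max(1, math.ceil(initial_size / cap))

print(effective_max_file_size(2 ** 40))          # 8589934592 (the 8 GB cap)
print(initial_file_count(3 * 1048576, 1048576))  # 3
```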
-
maxWindowBufferSize:
integer(int32)
Minimum Value:
-1
Maximum Value:1073741824
Default Value:-1
The maximum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM's address space per primary store file. Applies to synchronous write policies Direct-Write-With-Cache and Disabled, but only when the native wlfileio library is loaded.
A window buffer does not consume Java heap memory, but does consume off-heap (native) memory. If the store is unable to allocate the requested buffer size, it allocates smaller and smaller buffers until it reaches MinWindowBufferSize, and then fails if it cannot honor MinWindowBufferSize.
Oracle recommends setting the max window buffer size to more than double the size of the largest write (multiple concurrently updated records may be combined into a single write), and greater than or equal to the file size, unless there are other constraints. 32-bit JVMs may impose a total limit of between 2 and 4 GB for combined Java heap plus off-heap (native) memory usage.
See store attribute AllocatedWindowBufferBytes to find out the actual allocated window buffer size.
See Maximum File Size and Minimum Window Buffer Size.
-
migrationPolicy:
string
Default Value:
Off
Allowed Values:[ "Off", "On-Failure", "Always" ]
Controls migration and restart behavior of cluster-targeted JMS service artifact instances. When this setting is configured on a cluster-targeted store, it applies to all JMS artifacts that reference the store. See the migratable target settings for enabling migration and restart on migratable-targeted JMS artifacts.
Off: Disables migration support for cluster-targeted JMS service objects, and changes the default for Restart In Place to false. If you want a restart to be enabled when the Migration Policy is Off, then Restart In Place must be explicitly configured to true. This policy cannot be combined with the Singleton Distribution Policy.
On-Failure: Enables automatic migration and restart of instances on the failure of a subsystem Service or WebLogic Server instance, including automatic fail-back and load balancing of instances.
Always: Provides the same behavior as On-Failure and automatically migrates instances even in the event of a graceful shutdown or a partial cluster start.
Note: Cluster leasing must be configured for On-Failure and Always.
Messaging Bridge Notes:
When an instance per server is desired for a cluster-targeted messaging bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Distributed/Off, respectively; these are the defaults. When a single instance per cluster is desired for a cluster-targeted bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Singleton/On-Failure, respectively. A Migration Policy of Always is not recommended for bridges. If you cannot cluster-target a bridge and still need singleton behavior in a configured cluster, you can target the bridge to a migratable target and configure the Migration Policy on the migratable target to Exactly-Once.
-
minWindowBufferSize:
integer(int32)
Minimum Value:
-1
Maximum Value:1073741824
Default Value:-1
The minimum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM's address space per primary store file. Applies to synchronous write policies Direct-Write-With-Cache and Disabled, but only when a native wlfileio library is loaded.
See Maximum Window Buffer Size.
-
name:
string
Read Only:
true
The user-specified name of this MBean instance.
This name is included as one of the key properties in the MBean's javax.management.ObjectName:
Name=user-specified-name
Constraints
- legal null
-
notes:
string
Optional information that you can include to describe this configuration.
WebLogic Server saves this note in the domain's configuration file (config.xml) as XML PCDATA. All left angle brackets (<) are converted to the XML entity &lt;. Carriage returns/line feeds are preserved.
Note: If you create or edit a note from the Administration Console, the Administration Console does not preserve carriage returns/line feeds.
-
numberOfRestartAttempts:
integer(int32)
Minimum Value:
-1
Default Value:6
Specifies the maximum number of restart attempts.
A value > 0 specifies the maximum number of restart attempts.
A value of 0 specifies the same behavior as setting Restart In Place to false.
A value of -1 means infinite retries: restart is attempted until the instance either starts or the server instance shuts down.
-
partialClusterStabilityDelaySeconds:
integer(int64)
Default Value:
240
Specifies the amount of time, in seconds, to delay before a partially started cluster starts all cluster-targeted JMS artifact instances that are configured with a Migration Policy of Always or On-Failure.
Before this timeout expires or all servers are running, a cluster starts a subset of such instances based on the total number of servers running and the configured cluster size. Once the timeout expires or all servers have started, the system considers the cluster stable and starts any remaining services.
This delay ensures that services are balanced across a cluster even if the servers are started sequentially. It is ignored after a cluster is fully started (stable) or when individual servers are started.
A value > 0 specifies the time, in seconds, to delay before a partially started cluster starts dynamically configured services.
A value of 0 specifies no delay.
-
rebalanceEnabled:
boolean
Default Value:
false
If set to true, running cluster-targeted JMS instances are rebalanced when the system is idle and instances are unevenly distributed.
The system is considered idle when the Partial Cluster Stability Delay and Initial Boot Delay have passed, and no instances have moved and no server status has changed within the last two system check periods (typically 10 seconds between each check). Two is the default; you can tune this higher using Rebalance Delay Periods on the Cluster bean.
The system is considered unbalanced if any running server has a JMS instance count that is more than one higher than the instance count on any other running server.
The rebalance heuristic forces all running instances that are not on their preferred server to move to their preferred server if the preferred server is running. It then finds the alphanumerically highest failed-over instance on the running server with the most instances, moves this instance to the alphanumerically lowest-named running server with the fewest failed-over instances, and repeats this pattern until no running server has an instance count that is more than one higher than the instance count on any other running server.
Note: This setting only applies when the JMS artifact is cluster-targeted, its Distribution Policy is set to Distributed, and its Migration Policy is set to On-Failure or Always.
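The "unbalanced" test above is a simple threshold on per-server instance counts, sketched here with hypothetical server names:

```python
def is_unbalanced(instance_counts_by_server):
    """A cluster is considered unbalanced when some running server hosts an
    instance count more than one higher than another running server's count."""
    counts = list(instance_counts_by_server.values())
    return max(counts) - min(counts) > 1

print(is_unbalanced({"s1": 3, "s2": 1, "s3": 2}))  # True  (3 - 1 > 1)
print(is_unbalanced({"s1": 2, "s2": 1, "s3": 2}))  # False
```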
-
restartInPlace:
boolean
Enables a periodic automatic in-place restart of failed cluster-targeted or standalone-server-targeted JMS artifact instance(s) running on healthy WebLogic Server instances. See the migratable target settings for in-place restarts of migratable-targeted JMS artifacts. When the Restart In Place setting is configured on a store, it applies to all JMS artifacts that reference the store.
If the Migration Policy of the JMS artifact is set to Off, Restart In Place is disabled by default.
If the Migration Policy of the JMS artifact is set to On-Failure or Always, Restart In Place is enabled by default.
This attribute is not used by WebLogic messaging bridges, which automatically restart internal connections as needed.
For a JMS artifact that is cluster-targeted with a Migration Policy of On-Failure or Always, if restart fails after the configured maximum retry attempts, the instance migrates to a different server within the cluster.
-
secondsBetweenRestarts:
integer(int32)
Minimum Value:
1
Default Value:30
Specifies the amount of time, in seconds, to wait in between attempts to restart a failed service instance.
-
synchronousWritePolicy:
string
Default Value:
Direct-Write
Allowed Values:[ "Disabled", "Cache-Flush", "Direct-Write", "Direct-Write-With-Cache" ]
The disk write policy that determines how the file store writes data to disk.
This policy also affects the JMS file store's performance, scalability, and reliability. Oracle recommends Direct-Write-With-Cache, which tends to have the highest performance. The default value is Direct-Write. The valid policy options are:
Direct-Write: Direct I/O is supported on all platforms. When available, file stores in direct I/O mode automatically load the native I/O wlfileio driver. This option tends to out-perform Cache-Flush and tends to be slower than Direct-Write-With-Cache. This mode does not require a native store wlfileio driver, but performs faster when one is available.
Direct-Write-With-Cache: Store records are written synchronously to primary files in the directory specified by the Directory attribute and asynchronously to a corresponding cache file in the Cache Directory. The Cache Directory provides information about disk space, locking, security, and performance implications. This mode requires a native store wlfileio driver. If the native driver cannot be loaded, then the write mode automatically switches to Direct-Write. See Cache Directory.
Cache-Flush: Transactions cannot complete until all of their writes have been flushed down to disk. This policy is reliable and scales well as the number of simultaneous users increases. It is transactionally safe, but tends to be a lower performer than the direct-write policies.
Disabled: Transactions are complete as soon as their writes are cached in memory, instead of waiting for the writes to successfully reach the disk. This is the fastest policy because write requests do not block waiting to be synchronized to disk but, unlike other policies, it is not transactionally safe in the event of operating system or hardware failures. Such failures can lead to duplicate or lost data/messages. This option does not require native store wlfileio drivers, but may run faster when they are available. Some non-WebLogic JMS vendors default to a policy that is equivalent to Disabled.
Notes:
When available, file stores load WebLogic wlfileio native drivers, which can improve performance. These drivers are included with Windows, Solaris, Linux, and AIX WebLogic installations.
Certain older versions of Microsoft Windows may incorrectly report storage device synchronous write completion if the Windows default Write Cache Enabled setting is used. This violates the transactional semantics of transactional products (not specific to Oracle), including file stores configured with a Direct-Write (default) or Direct-Write-With-Cache policy, as a system crash or power failure can lead to a loss or a duplication of records/messages. One visible symptom is persistent message/transaction throughput that exceeds the physical capabilities of your storage device. You can address the problem by applying a Microsoft-supplied patch, disabling the Windows Write Cache Enabled setting, or by using a power-protected storage device. See http://support.microsoft.com/kb/281672 and http://support.microsoft.com/kb/332023.
NFS storage note: On some operating systems, native driver memory-mapping is incompatible with NFS when files are locked. Stores with synchronous write policies Direct-Write-With-Cache or Disabled, and WebLogic JMS paging stores, enhance performance by using the native wlfileio driver to perform memory-map operating system calls. When a store detects an incompatibility between NFS, file locking, and memory mapping, it automatically downgrades to conventional read/write system calls instead of memory mapping. For best performance, Oracle recommends investigating alternative NFS client drivers, configuring a non-NFS storage location, or, in controlled environments and at your own risk, disabling the file locks (see Enable File Locking). For more information, see "Tuning the WebLogic Persistent Store" in Tuning Performance of Oracle WebLogic Server.
-
tags:
array Items
Title:
Items
Return all tags on this Configuration MBean
-
targets:
array Target References
Title:
Target References
Contains the array of target references. The server instances, clusters, or migratable targets defined in the current domain that are candidates for hosting a file store, JDBC store, or replicated store. If scoped to a Resource Group or Resource Group Template, the target is inherited from the Virtual Target.
When selecting a cluster, the store must be targeted to the same cluster as the JMS server. When selecting a migratable target, the store must be targeted to the same migratable target as the migratable JMS server or SAF agent. As a best practice, a path service should use its own custom store and share the same target as the store.
-
type:
string
Read Only:
true
Returns the type of the MBean.
Constraints
- unharvestable
-
XAResourceName:
string
Read Only:
true
Default Value:null
Overrides the name of the XAResource that this store registers with JTA.
You should not normally set this attribute. Its purpose is to allow the name of the XAResource to be overridden when a store has been upgraded from an older release and the store contained prepared transactions. The generated name should be used in all other cases.
Constraints
- legal null
array
Target References
The server instances, clusters, or migratable targets defined in the current domain that are candidates for hosting a file store, JDBC store, or replicated store. If scoped to a Resource Group or Resource Group Template, the target is inherited from the Virtual Target.
When selecting a cluster, the store must be targeted to the same cluster as the JMS server. When selecting a migratable target, the store must be targeted to the same migratable target as the migratable JMS server or SAF agent. As a best practice, a path service should use its own custom store and share the same target as the store.
-
Array of:
object Target Reference
Title:
Target Reference
Contains the target reference.
object
Target Reference
-
identity:
array Identity
Title:
Identity
A reference to another WLS REST resource.
array
Identity
-
Admin: basic
Type:
basic
Description:A user in the Admin security role.
-
Deployer: basic
Type:
basic
Description:A user in the Deployer security role.