Interface NamedCache<K,V>

Type Parameters:
K - the type of the cache entry keys
V - the type of the cache entry values
All Superinterfaces:
AutoCloseable, CacheMap<K,V>, ConcurrentMap<K,V>, InvocableMap<K,V>, Map<K,V>, NamedCollection, NamedMap<K,V>, ObservableMap<K,V>, QueryMap<K,V>, Releasable
All Known Implementing Classes:
BundlingNamedCache, ContinuousQueryCache, ConverterCollections.ConverterNamedCache, NearCache, ReadonlyNamedCache, VersionedNearCache, WrapperNamedCache

public interface NamedCache<K,V> extends NamedMap<K,V>, CacheMap<K,V>
A Map-based data-structure that manages entries across one or more processes. Entries are typically managed in memory, and are often comprised of data that is also stored in an external system, for example a database, or data that has been assembled or calculated at some significant cost. Such entries are referred to as being cached.
Since:
Coherence 1.1.2
Author:
gg 2002.03.27
  • Method Details

    • getCacheName

      String getCacheName()
      Return the cache name.
      Returns:
      the cache name
    • getCacheService

      CacheService getCacheService()
      Return the CacheService that this NamedCache is a part of.
      Returns:
      the CacheService
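As a sketch of how these two accessors are typically used (the cache name `"prices"` is a hypothetical example, and obtaining the cache via `CacheFactory.getCache` assumes a configured Coherence runtime):

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.CacheService;
import com.tangosol.net.NamedCache;

public class CacheInfoExample {
    public static void main(String[] args) {
        // Obtain a NamedCache; "prices" is a hypothetical cache name
        NamedCache<String, Double> cache = CacheFactory.getCache("prices");

        // The cache knows its own name and the CacheService that manages it
        String       name    = cache.getCacheName();     // "prices"
        CacheService service = cache.getCacheService();

        System.out.println(name + " is managed by service "
                + service.getInfo().getServiceName());
    }
}
```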
    • put

      V put(K key, V value, long cMillis)
      Associates the specified value with the specified key in this cache, allowing an expiry to be specified for the cache entry.

      Note: Though NamedCache interface extends CacheMap, not all implementations currently support this functionality.

      For example, if a cache is configured to be a replicated, optimistic or distributed cache then its backing map must be configured as a local cache. If a cache is configured to be a near cache then the front map must be configured as a local cache and the back map must support this feature as well, typically by being a distributed cache backed by a local cache (as above).

      Specified by:
      put in interface CacheMap<K,V>
      Parameters:
      key - key with which the specified value is to be associated
      value - value to be associated with the specified key
      cMillis - the number of milliseconds until the cache entry will expire, also referred to as the entry's "time to live"; pass CacheMap.EXPIRY_DEFAULT to use the cache's default time-to-live setting; pass CacheMap.EXPIRY_NEVER to indicate that the cache entry should never expire; this milliseconds value is not a date/time value, such as is returned from System.currentTimeMillis()
      Returns:
      previous value associated with specified key, or null if there was no mapping for key. A null return can also indicate that the map previously associated null with the specified key, if the implementation supports null values
      Throws:
      UnsupportedOperationException - if the requested expiry is a positive value and the implementation does not support expiry of cache entries
      Since:
      Coherence 2.3
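A minimal sketch of the three expiry modes described above (the cache name `"sessions"` and the keys are hypothetical; obtaining the cache assumes a configured Coherence runtime, and the underlying cache topology must support per-entry expiry):

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.CacheMap;

public class ExpiryExample {
    public static void main(String[] args) {
        // "sessions" is a hypothetical cache name
        NamedCache<String, String> cache = CacheFactory.getCache("sessions");

        // Entry expires 30 seconds (30,000 ms) after being put;
        // note this is a duration, not a date/time value
        cache.put("token-1", "alice", 30_000L);

        // Use the cache's configured default time-to-live
        cache.put("token-2", "bob", CacheMap.EXPIRY_DEFAULT);

        // Entry never expires, regardless of the cache's default
        cache.put("token-3", "carol", CacheMap.EXPIRY_NEVER);
    }
}
```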
    • forEach

      default void forEach(Collection<? extends K> collKeys, BiConsumer<? super K,? super V> action)
      Perform the given action for each entry selected by the specified key set until all entries have been processed or the action throws an exception.

      Exceptions thrown by the action are relayed to the caller.

      The implementation processes each entry on the client and should only be used for read-only client-side operations (such as adding cache entries to a UI widget, for example).

      Any entry mutation caused by the specified action will not be propagated to the server when this method is called on a distributed cache, so it should be avoided. Mutating operations on a subset of entries should instead be implemented using one of the InvocableMap.invokeAll(com.tangosol.util.InvocableMap.EntryProcessor<K, V, R>), Map.replaceAll(java.util.function.BiFunction<? super K, ? super V, ? extends V>), Map.compute(K, java.util.function.BiFunction<? super K, ? super V, ? extends V>), or Map.merge(K, V, java.util.function.BiFunction<? super V, ? super V, ? extends V>) methods.

      Specified by:
      forEach in interface CacheMap<K,V>
      Specified by:
      forEach in interface NamedMap<K,V>
      Parameters:
      collKeys - the keys to process; these keys are not required to exist within the Map
      action - the action to be performed for each entry
      Since:
      12.2.1
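The read-only, client-side usage described above might look like the following sketch (the cache name `"prices"` and the ticker keys are hypothetical):

```java
import java.util.List;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ForEachExample {
    public static void main(String[] args) {
        // "prices" is a hypothetical cache name
        NamedCache<String, Double> cache = CacheFactory.getCache("prices");

        // Read-only, client-side processing of the entries for selected keys;
        // keys that are absent from the cache are simply skipped
        cache.forEach(List.of("AAPL", "ORCL", "MSFT"),
                (symbol, price) -> System.out.println(symbol + " -> " + price));
    }
}
```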
    • as

      default <C extends NamedCache<K, V>> C as(Class<C> clzNamedCache)
      Request a specific type of reference to a NamedCache that this NamedCache may additionally implement or support.
      Type Parameters:
      C - the type of NamedCache
      Parameters:
      clzNamedCache - the class of NamedCache
      Returns:
      a NamedCache of the requested type
      Throws:
      UnsupportedOperationException - when this NamedCache doesn't support or implement the requested class
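As an illustration, a caller might request a NearCache-typed reference and fall back gracefully when the underlying cache is not a near cache (the cache name `"catalog"` is hypothetical; the class-literal call may produce an unchecked-generics warning):

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.NearCache;

public class AsExample {
    public static void main(String[] args) {
        // "catalog" is a hypothetical cache name
        NamedCache<String, String> cache = CacheFactory.getCache("catalog");

        try {
            // Request a more specific reference type; throws
            // UnsupportedOperationException if this cache does not
            // implement or support NearCache
            NearCache<String, String> near = cache.as(NearCache.class);
            System.out.println("front map size: " + near.getFrontMap().size());
        } catch (UnsupportedOperationException e) {
            // Not a near cache; continue using the plain NamedCache
            System.out.println("cache size: " + cache.size());
        }
    }
}
```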
    • async

      default AsyncNamedCache<K,V> async()
      Return an asynchronous wrapper for this NamedCache.

      By default, the order of execution of asynchronous operations invoked on the returned AsyncNamedCache will be preserved by ensuring that all operations invoked from the same client thread are executed on the server sequentially, using the same unit-of-order. This tends to provide the best performance for fast, non-blocking operations.

      However, when invoking CPU-intensive or blocking operations, such as read-through or write-through operations that access a remote database or web service, it may be very beneficial to allow the server to parallelize execution by passing the AsyncNamedMap.OrderBy.none() configuration option to the async(AsyncNamedMap.Option...) method. Note that in that case there are no guarantees for the order of execution.

      Returns:
      asynchronous wrapper for this NamedCache
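A sketch of the default, ordered usage described above (the cache name `"counters"` is hypothetical; operations issued from one client thread complete in order):

```java
import java.util.concurrent.CompletableFuture;

import com.tangosol.net.AsyncNamedCache;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class AsyncExample {
    public static void main(String[] args) {
        // "counters" is a hypothetical cache name
        NamedCache<String, Integer> cache = CacheFactory.getCache("counters");
        AsyncNamedCache<String, Integer> async = cache.async();

        // Non-blocking put followed by get; because both are invoked from
        // the same client thread, they execute on the server in order
        CompletableFuture<Void> done = async.put("hits", 1)
                .thenCompose(v -> async.get("hits"))
                .thenAccept(n -> System.out.println("hits = " + n));
        done.join();
    }
}
```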
    • async

      default AsyncNamedCache<K,V> async(AsyncNamedMap.Option... options)
      Return an asynchronous wrapper for this NamedCache.

      By default, the order of execution of asynchronous operations invoked on the returned AsyncNamedCache will be preserved by ensuring that all operations invoked from the same client thread are executed on the server sequentially, using the same unit-of-order. This tends to provide the best performance for fast, non-blocking operations.

      However, when invoking CPU-intensive or blocking operations, such as read-through or write-through operations that access a remote database or web service, it may be very beneficial to allow the server to parallelize execution by passing the AsyncNamedMap.OrderBy.none() configuration option to this method. Note that in that case there are no guarantees for the order of execution.

      Specified by:
      async in interface NamedMap<K,V>
      Parameters:
      options - the configuration options
      Returns:
      asynchronous wrapper for this NamedCache
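Relaxing the ordering guarantee as described above might look like this sketch (the cache name `"documents"` is hypothetical):

```java
import com.tangosol.net.AsyncNamedCache;
import com.tangosol.net.AsyncNamedMap;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ParallelAsyncExample {
    public static void main(String[] args) {
        // "documents" is a hypothetical cache name
        NamedCache<String, String> cache = CacheFactory.getCache("documents");

        // Allow the server to parallelize execution; suitable for
        // CPU-intensive or blocking (e.g. read/write-through) operations,
        // at the cost of giving up all ordering guarantees
        AsyncNamedCache<String, String> async =
                cache.async(AsyncNamedMap.OrderBy.none());

        async.put("doc-1", "draft").join();
    }
}
```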
    • view

      default ViewBuilder<K,V> view()
      Construct a view of this NamedCache.
      Specified by:
      view in interface NamedMap<K,V>
      Returns:
      a local view for this NamedCache
      Since:
      12.2.1.4
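A view is typically narrowed with a filter before being built; the following sketch assumes a hypothetical cache name `"scores"` and uses the Filters and ValueExtractor helper APIs:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filters;
import com.tangosol.util.ValueExtractor;

public class ViewExample {
    public static void main(String[] args) {
        // "scores" is a hypothetical cache name
        NamedCache<String, Integer> cache = CacheFactory.getCache("scores");

        // Build a local, continuously-updated view containing only the
        // entries whose value is >= 90
        NamedCache<String, Integer> topScores = cache.view()
                .filter(Filters.greaterEqual(ValueExtractor.identity(), 90))
                .build();

        System.out.println("top scores: " + topScores.size());
    }
}
```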