Java/Collections Data Structure/Concurrent

Current version as of 10:24, 1 June 2010

A version of Hashtable supporting concurrency for both retrievals and updates
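Doug Lea's class below was the basis for the `java.util.concurrent.ConcurrentHashMap` that shipped with JDK 5. As a minimal sketch of the usage pattern it enables, here is the same idea expressed with the standard-library descendant (so it compiles on any modern JDK); the class name `ConcurrentMapDemo` and the key format are illustrative only:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Threads may put concurrently with no external locking,
        // unlike a plain HashMap (which would corrupt) or a
        // Hashtable (which would serialize every operation).
        final Map<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] writers = new Thread[4];
        for (int t = 0; t < writers.length; t++) {
            final int id = t;
            writers[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    map.put("key-" + id + "-" + i, i);
                }
            });
            writers[t].start();
        }
        for (Thread w : writers) {
            w.join();
        }
        // All 4 x 1000 distinct keys survive the concurrent inserts.
        System.out.println(map.size());
    }
}
```

As the class javadoc below explains, retrievals can run while updates are in progress; only updates mapping to the same lock segment contend with each other.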

   <source lang="java">
 

/*

File: ConcurrentHashMap
Written by Doug Lea. Adapted and released, under explicit
permission, from JDK1.2 HashMap.java and Hashtable.java which
carries the following copyright:
* Copyright 1997 by Sun Microsystems, Inc.,
* 901 San Antonio Road, Palo Alto, California, 94303, U.S.A.
* All rights reserved.
*
* This software is the confidential and proprietary information
* of Sun Microsystems, Inc. ("Confidential Information").  You
* shall not disclose such Confidential Information and shall use
* it only in accordance with the terms of the license agreement
* you entered into with Sun.
History:
Date       Who                What
26nov2000  dl               Created, based on ConcurrentReaderHashMap
12jan2001  dl               public release
17nov2001  dl               Minor tunings
24oct2003  dl               Segment implements Serializable
*/

import java.io.IOException;
import java.io.Serializable;
import java.util.AbstractCollection;
import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.Collection;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

/**
 * A version of Hashtable supporting concurrency for both retrievals and
 * updates:
 *
 * <dl>
 * <dt>Retrievals</dt>
 * <dd>
 * Retrievals may overlap updates. (This is the same policy as
 * ConcurrentReaderHashMap.) Successful retrievals using get(key) and
 * containsKey(key) usually run without locking. Unsuccessful retrievals
 * (i.e., when the key is not present) do involve brief synchronization
 * (locking). Because retrieval operations can ordinarily overlap with update
 * operations (i.e., put, remove, and their derivatives), retrievals can only
 * be guaranteed to return the results of the most recently completed
 * operations holding upon their onset. Retrieval operations may or may not
 * return results reflecting in-progress writing operations. However, the
 * retrieval operations do always return consistent results -- either those
 * holding before any single modification or after it, but never a nonsense
 * result. For aggregate operations such as putAll and clear, concurrent
 * reads may reflect insertion or removal of only some entries.
 * <p>
 * Iterators and Enumerations (i.e., those returned by keySet().iterator(),
 * entrySet().iterator(), values().iterator(), keys(), and elements()) return
 * elements reflecting the state of the hash table at some point at or since
 * the creation of the iterator/enumeration. They will return at most one
 * instance of each element (via next()/nextElement()), but might or might
 * not reflect puts and removes that have been processed since they were
 * created. They do not throw ConcurrentModificationException. However,
 * these iterators are designed to be used by only one thread at a time.
 * Passing an iterator across multiple threads may lead to unpredictable
 * results if the table is being concurrently modified.
 * </dd>
 *
 * <dt>Updates</dt>
 * <dd>
 * This class supports a hard-wired preset concurrency level of 32. This
 * allows a maximum of 32 put and/or remove operations to proceed
 * concurrently. This level is an upper bound on concurrency, not a
 * guarantee, since it interacts with how well-strewn elements are across
 * bins of the table. (The preset value in part reflects the fact that even
 * on large multiprocessors, factors other than synchronization tend to be
 * bottlenecks when more than 32 threads concurrently attempt updates.)
 * Additionally, operations triggering internal resizing and clearing do not
 * execute concurrently with any operation.
 * <p>
 * There is NOT any support for locking the entire table to prevent updates.
 * This makes it impossible, for example, to add an element only if it is
 * not already present, since another thread may be in the process of doing
 * the same thing. If you need such capabilities, consider instead using the
 * ConcurrentReaderHashMap class.
 * </dd>
 * </dl>
 *
 * Because of how concurrency control is split up, the size() and isEmpty()
 * methods require accumulations across 32 control segments, and so might be
 * slightly slower than you expect.
 * <p>
 * This class may be used as a direct replacement for java.util.Hashtable in
 * any application that does not rely on the ability to lock the entire
 * table to prevent updates. As of this writing, it performs much faster
 * than Hashtable in typical multi-threaded applications with multiple
 * readers and writers. Like Hashtable but unlike java.util.HashMap, this
 * class does NOT allow null to be used as a key or value.
 * <p>
 * Implementation note: A slightly faster implementation of this class will
 * be possible once planned Java Memory Model revisions are in place.
 */
public class ConcurrentHashMap extends AbstractMap implements Map, Cloneable,
    Serializable {

  private static final long serialVersionUID = 1L;

  /*
   * The basic strategy is an optimistic-style scheme based on the guarantee
   * that the hash table and its lists are always kept in a consistent
   * enough state to be read without locking:
   *
   * Read operations first proceed without locking, by traversing the
   * apparently correct list of the apparently correct bin. If an entry is
   * found, but not invalidated (value field null), it is returned. If not
   * found, operations must recheck (after a memory barrier) to make sure
   * they are using both the right list and the right table (which can
   * change under resizes). If invalidated, reads must acquire the main
   * update lock to wait out the update, and then re-traverse.
   *
   * All list additions are at the front of each bin, making it easy to
   * check changes, and also fast to traverse. Entry next pointers are never
   * assigned. Remove() builds new nodes when necessary to preserve this.
   *
   * Remove() (also clear()) invalidates removed nodes to alert read
   * operations that they must wait out the full modifications.
   *
   * Locking for puts, removes (and, when necessary, gets, etc.) is
   * controlled by Segments, each covering a portion of the table. During
   * operations requiring global exclusivity (mainly resize and clear), ALL
   * of these locks are acquired at once.
   * Note that these segments are NOT contiguous -- they are based on the
   * least 5 bits of hashcodes. This ensures that the same segment controls
   * the same slots before and after resizing, which is necessary for
   * supporting concurrent retrievals. This comes at the price of a mismatch
   * of logical vs physical locality, but this seems not to be a performance
   * problem in practice.
   */

  /**
   * The hash table data.
   */
  protected transient Entry[] table;

  /**
   * The number of concurrency control segments. The value can be at most 32
   * since ints are used as bitsets over segments. Empirically, it doesn't
   * seem to pay to decrease it either, so the value should be at least 32.
   * In other words, do not redefine this :-)
   */
  protected static final int CONCURRENCY_LEVEL = 32;

  /**
   * Mask value for indexing into segments.
   */
  protected static final int SEGMENT_MASK = CONCURRENCY_LEVEL - 1;

  /**
   * Bookkeeping for each concurrency control segment. Each segment contains
   * a local count of the number of elements in its region. However, the
   * main use of a Segment is for its lock.
   */
  protected final static class Segment implements Serializable {
    private static final long serialVersionUID = 1L;

    /**
     * The number of elements in this segment's region. It is always
     * updated within synchronized blocks.
     */
    protected int count;

    /**
     * Get the count under synch.
     * @return count under sync
     */
    protected synchronized int getCount() {
      return count;
    }

    /**
     * Force a synchronization.
     */
    protected synchronized void synch() {
    }
  }

  /**
   * The array of concurrency control segments.
   */
  protected final Segment[] segments = new Segment[CONCURRENCY_LEVEL];

  /**
   * The default initial number of table slots for this table (32). Used
   * when not otherwise specified in constructor.
   */
  public static final int DEFAULT_INITIAL_CAPACITY = 32;

  /**
   * The minimum capacity, used if a lower value is implicitly specified by
   * either of the constructors with arguments. MUST be a power of two.
   */
  private static final int MINIMUM_CAPACITY = 32;

  /**
   * The maximum capacity, used if a higher value is implicitly specified by
   * either of the constructors with arguments. MUST be a power of
   * two <= 1<<30.
   */
  private static final int MAXIMUM_CAPACITY = 1 << 30;

  /**
   * The default load factor for this table (0.75). Used when not otherwise
   * specified in constructor.
   */
  public static final float DEFAULT_LOAD_FACTOR = 0.75f;

  /**
   * The load factor for the hash table.
   *
   * @serial
   */
  protected final float loadFactor;

  /**
   * Per-segment resize threshold.
   *
   * @serial
   */
  protected int threshold;

  /**
   * Number of segments voting for resize. The table is doubled when 1/4 of
   * the segments reach threshold. Volatile but updated without synch since
   * this is just a heuristic.
   */
  protected transient volatile int votesForResize;

  /**
   * Return the number of set bits in w. For a derivation of this algorithm,
   * see "Algorithms and data structures with applications to graphics and
   * geometry", by Jurg Nievergelt and Klaus Hinrichs, Prentice Hall, 1993.
   * See also notes by Torsten Sillke at
   * http://www.mathematik.uni-bielefeld.de/~sillke/PROBLEMS/bitcount
   * @param w arg
   * @return number of set bits
   */
  protected static int bitcount(int w) {
    w -= (0xaaaaaaaa & w) >>> 1;
    w = (w & 0x33333333) + ((w >>> 2) & 0x33333333);
    w = (w + (w >>> 4)) & 0x0f0f0f0f;
    w += w >>> 8;
    w += w >>> 16;
    return w & 0xff;
  }

  /**
   * Returns the appropriate capacity (power of two) for the specified
   * initial capacity argument.
   * @param initialCapacity the initial capacity
   * @return appropriate capacity
   */
  private int p2capacity(int initialCapacity) {
    int cap = initialCapacity;
    // Compute the appropriate capacity
    int result;
    if (cap > MAXIMUM_CAPACITY || cap < 0) {
      result = MAXIMUM_CAPACITY;
    } else {
      result = MINIMUM_CAPACITY;
      while (result < cap) {
        result <<= 1;
      }
    }
    return result;
  }

  /**
   * Return hash code for Object x.
   * Since we are using power-of-two tables, it is worth the effort to
   * improve the hashcode via the same multiplicative scheme as used in
   * IdentityHashMap.
   * @param x the object
   * @return hash code
   */
  protected static int hash(Object x) {
    int h = x.hashCode();
    // Multiply by 127 (quickly, via shifts), and mix in some high
    // bits to help guard against bunching of codes that are
    // consecutive or equally spaced.
    return ((h << 7) - h + (h >>> 9) + (h >>> 17));
  }

  /**
   * Check for equality of non-null references x and y.
   * @param x ref
   * @param y ref
   * @return is equal
   */
  protected boolean eq(Object x, Object y) {
    return x == y || x.equals(y);
  }

  /**
   * Create table array and set the per-segment threshold.
   *
   * @param capacity the capacity
   * @return table array
   */
  protected Entry[] newTable(int capacity) {
    threshold = (int)(capacity * loadFactor / CONCURRENCY_LEVEL) + 1;
    return new Entry[capacity];
  }

  /**
   * Constructs a new, empty map with the specified initial capacity and the
   * specified load factor.
   *
   * @param initialCapacity
   *          the initial capacity. The actual initial capacity is rounded
   *          to the nearest power of two.
   * @param loadFactor
   *          the load factor threshold, used to control resizing. This
   *          value is used in an approximate way: When at least a quarter
   *          of the segments of the table reach per-segment threshold, or
   *          one of the segments itself exceeds overall threshold, the
   *          table is doubled. This will on average cause resizing when the
   *          table-wide load factor is slightly less than the threshold. If
   *          you'd like to avoid resizing, you can set this to a
   *          ridiculously large value.
   * @throws IllegalArgumentException
   *           if the load factor is nonpositive.
   */
  public ConcurrentHashMap(int initialCapacity, float loadFactor) {
    if (!(loadFactor > 0)) {
      throw new IllegalArgumentException("Illegal Load factor: "
          + loadFactor);
    }
    this.loadFactor = loadFactor;
    for (int i = 0; i < segments.length; ++i) {
      segments[i] = new Segment();
    }
    int cap = p2capacity(initialCapacity);
    table = newTable(cap);
  }

  /**
   * Constructs a new, empty map with the specified initial capacity and
   * default load factor.
   *
   * @param initialCapacity
   *          the initial capacity of the ConcurrentHashMap.
   * @throws IllegalArgumentException
   *           if the initial maximum number of elements is less than zero.
   */
  public ConcurrentHashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
  }

  /**
   * Constructs a new, empty map with a default initial capacity and default
   * load factor.
   */
  public ConcurrentHashMap() {
    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
  }

  /**
   * Constructs a new map with the same mappings as the given map. The map
   * is created with a capacity of twice the number of mappings in the given
   * map or 32 (whichever is greater), and a default load factor.
   * @param t map to copy
   */
  public ConcurrentHashMap(Map t) {
    this(Math.max((int)(t.size() / DEFAULT_LOAD_FACTOR) + 1,
        MINIMUM_CAPACITY), DEFAULT_LOAD_FACTOR);
    putAll(t);
  }

  /**
   * Returns the number of key-value mappings in this map.
   *
   * @return the number of key-value mappings in this map.
   */
  public int size() {
    int c = 0;
    for (int i = 0; i < segments.length; ++i) {
      c += segments[i].getCount();
    }
    return c;
  }

  /**
   * Returns true if this map contains no key-value mappings.
   *
   * @return true if this map contains no key-value mappings.
   */
  public boolean isEmpty() {
    for (int i = 0; i < segments.length; ++i) {
      if (segments[i].getCount() != 0) {
        return false;
      }
    }
    return true;
  }

  /**
   * Returns the value to which the specified key is mapped in this table.
   *
   * @param key
   *          a key in the table.
   * @return the value to which the key is mapped in this table; null if
   *         the key is not mapped to any value in this table.
   * @exception NullPointerException
   *              if the key is null.
   * @see #put(Object, Object)
   */
  public Object get(Object key) {
    int hash = hash(key); // throws null pointer exception if key null

    // Try first without locking...
    Entry[] tab = table;
    int index = hash & (tab.length - 1);
    Entry first = tab[index];
    Entry e;
    for (e = first; e != null; e = e.next) {
      if (e.hash == hash && eq(key, e.key)) {
        Object value = e.value;
        if (value != null) {
          return value;
        } else {
          break;
        }
      }
    }

    // Recheck under synch if key apparently not there or interference
    Segment seg = segments[hash & SEGMENT_MASK];
    synchronized (seg) {
      tab = table;
      index = hash & (tab.length - 1);
      Entry newFirst = tab[index];
      if (e != null || first != newFirst) {
        for (e = newFirst; e != null; e = e.next) {
          if (e.hash == hash && eq(key, e.key)) {
            return e.value;
          }
        }
      }
      return null;
    }
  }

  /**
   * Tests if the specified object is a key in this table.
   *
   * @param key
   *          possible key.
   * @return true if and only if the specified object is a key in this
   *         table, as determined by the equals method; false otherwise.
   * @exception NullPointerException
   *              if the key is null.
   * @see #contains(Object)
   */
  public boolean containsKey(Object key) {
    return get(key) != null;
  }

  /**
   * Maps the specified key to the specified value in this table. Neither
   * the key nor the value can be null. (Note that this policy is the same
   * as for java.util.Hashtable, but unlike java.util.HashMap, which does
   * accept nulls as valid keys and values.)
   * <p>
   * The value can be retrieved by calling the get method with a key that is
   * equal to the original key.
   *
   * @param key
   *          the table key.
   * @param value
   *          the value.
   * @return the previous value of the specified key in this table, or null
   *         if it did not have one.
   * @exception NullPointerException
   *              if the key or value is null.
   * @see Object#equals(Object)
   * @see #get(Object)
   */
  public Object put(Object key, Object value) {
    if (value == null) {
      // Throw NullPointerException, as the documented contract promises
      throw new NullPointerException("Value must not be null");
    }
    int hash = hash(key);
    Segment seg = segments[hash & SEGMENT_MASK];
    int segcount;
    Entry[] tab;
    int votes;
    synchronized (seg) {
      tab = table;
      int index = hash & (tab.length - 1);
      Entry first = tab[index];
      for (Entry e = first; e != null; e = e.next) {
        if (e.hash == hash && eq(key, e.key)) {
          Object oldValue = e.value;
          e.value = value;
          return oldValue;
        }
      }
      // Add to front of list
      Entry newEntry = new Entry(hash, key, value, first);
      tab[index] = newEntry;
      if ((segcount = ++seg.count) < threshold) {
        return null;
      }
      int bit = (1 << (hash & SEGMENT_MASK));
      votes = votesForResize;
      if ((votes & bit) == 0) {
        votes = votesForResize |= bit;
      }
    }
    // Attempt resize if 1/4 segs vote,
    // or if this seg itself reaches the overall threshold.
    // (The latter check is just a safeguard to avoid pathological cases.)
    if (bitcount(votes) >= CONCURRENCY_LEVEL / 4
        || segcount > (threshold * CONCURRENCY_LEVEL)) {
      resize(0, tab);
    }
    return null;
  }

  /**
   * Gather all locks in order to call rehash, by recursing within synch
   * blocks for each segment index.
   *
   * @param index
   *          the current segment. initial call value must be 0
   * @param assumedTab
   *          the state of table on first call to resize. If this changes
   *          on any call, the attempt is aborted because the table has
   *          already been resized by another thread.
   */
  protected void resize(int index, Entry[] assumedTab) {
    Segment seg = segments[index];
    synchronized (seg) {
      if (assumedTab == table) {
        int next = index + 1;
        if (next < segments.length) {
          resize(next, assumedTab);
        } else {
          rehash();
        }
      }
    }
  }

  /**
   * Rehashes the contents of this map into a new table with a larger
   * capacity.
   */
  protected void rehash() {
    votesForResize = 0; // reset
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity >= MAXIMUM_CAPACITY) {
      threshold = Integer.MAX_VALUE; // avoid retriggering
      return;
    }
    int newCapacity = oldCapacity << 1;
    Entry[] newTable = newTable(newCapacity);
    int mask = newCapacity - 1;
    /*
     * Reclassify nodes in each list to new Map. Because we are using
     * power-of-two expansion, the elements from each bin must either stay
     * at the same index, or move to oldCapacity+index. We also eliminate
     * unnecessary node creation by catching cases where old nodes can be
     * reused because their next fields won't change. Statistically, at the
     * default threshold, only about one-sixth of them need cloning. (The
     * nodes they replace will be garbage collectable as soon as they are
     * no longer referenced by any reader thread that may be in the midst
     * of traversing the table right now.)
     */
    for (int i = 0; i < oldCapacity; i++) {
      // We need to guarantee that any existing reads of the old Map can
      // proceed. So we cannot yet null out each bin.
      Entry e = oldTable[i];
      if (e != null) {
        int idx = e.hash & mask;
        Entry next = e.next;
        // Single node on list
        if (next == null) {
          newTable[idx] = e;
        } else {
          // Reuse trailing consecutive sequence of all same bit
          Entry lastRun = e;
          int lastIdx = idx;
          for (Entry last = next; last != null; last = last.next) {
            int k = last.hash & mask;
            if (k != lastIdx) {
              lastIdx = k;
              lastRun = last;
            }
          }
          newTable[lastIdx] = lastRun;
          // Clone all remaining nodes
          for (Entry p = e; p != lastRun; p = p.next) {
            int k = p.hash & mask;
            newTable[k] = new Entry(p.hash, p.key, p.value, newTable[k]);
          }
        }
      }
    }
    table = newTable;
  }

  /**
   * Removes the key (and its corresponding value) from this table. This
   * method does nothing if the key is not in the table.
   *
   * @param key
   *          the key that needs to be removed.
   * @return the value to which the key had been mapped in this table, or
   *         null if the key did not have a mapping.
   * @exception NullPointerException
   *              if the key is null.
   */
  public Object remove(Object key) {
    return remove(key, null);
  }

  /**
   * Removes the (key, value) pair from this table. This method does
   * nothing if the key is not in the table, or if the key is associated
   * with a different value. This method is needed by EntrySet.
   *
   * @param key
   *          the key that needs to be removed.
   * @param value
   *          the associated value. If the value is null, it means "any
   *          value".
   * @return the value to which the key had been mapped in this table, or
   *         null if the key did not have a mapping.
   * @exception NullPointerException
   *              if the key is null.
   */
  protected Object remove(Object key, Object value) {
    /*
     * Find the entry, then 1. Set value field to null, to force get() to
     * retry 2. Rebuild the list without this entry. All entries following
     * the removed node can stay in the list, but all preceding ones need
     * to be cloned. Traversals rely on this strategy to ensure that
     * elements will not be repeated during iteration.
     */
    int hash = hash(key);
    Segment seg = segments[hash & SEGMENT_MASK];
    synchronized (seg) {
      Entry[] tab = table;
      int index = hash & (tab.length - 1);
      Entry first = tab[index];
      Entry e = first;
      for (;;) {
        if (e == null) {
          return null;
        }
        if (e.hash == hash && eq(key, e.key)) {
          break;
        }
        e = e.next;
      }
      Object oldValue = e.value;
      if (value != null && !value.equals(oldValue)) {
        return null;
      }
      e.value = null;
      Entry head = e.next;
      for (Entry p = first; p != e; p = p.next) {
        head = new Entry(p.hash, p.key, p.value, head);
      }
      tab[index] = head;
      seg.count--;
      return oldValue;
    }
  }

  /**
   * Returns true if this map maps one or more keys to the specified value.
   * Note: This method requires a full internal traversal of the hash
   * table, and so is much slower than method containsKey.
   *
   * @param value
   *          value whose presence in this map is to be tested.
   * @return true if this map maps one or more keys to the specified value.
   * @exception NullPointerException
   *              if the value is null.
   */
  public boolean containsValue(Object value) {
    if (value == null) {
      // Throw NullPointerException, as the documented contract promises
      throw new NullPointerException("Value must not be null");
    }
    for (int s = 0; s < segments.length; ++s) {
      Segment seg = segments[s];
      Entry[] tab;
      synchronized (seg) {
        tab = table;
      }
      for (int i = s; i < tab.length; i += segments.length) {
        for (Entry e = tab[i]; e != null; e = e.next) {
          if (value.equals(e.value)) {
            return true;
          }
        }
      }
    }
    return false;
  }

  /**
   * Tests if some key maps into the specified value in this table. This
   * operation is more expensive than the containsKey method.
   * <p>
   * Note that this method is identical in functionality to containsValue
   * (which is part of the Map interface in the collections framework).
   *
   * @param value
   *          a value to search for.
   * @return true if and only if some key maps to the value argument in
   *         this table as determined by the equals method; false
   *         otherwise.
   * @exception NullPointerException
   *              if the value is null.
   * @see #containsKey(Object)
   * @see #containsValue(Object)
   * @see Map
   */
  public boolean contains(Object value) {
    return containsValue(value);
  }

  /**
   * Copies all of the mappings from the specified map to this one.
   *
   * These mappings replace any mappings that this map had for any of the
   * keys currently in the specified Map.
   *
   * @param t
   *          Mappings to be stored in this map.
   */
  public void putAll(Map t) {
    int n = t.size();
    if (n == 0) {
      return;
    }
    // Expand enough to hold at least n elements without resizing.
    // We can only resize table by factor of two at a time.
    // It is faster to rehash with fewer elements, so do it now.
    for (;;) {
      Entry[] tab;
      int max;
      synchronized (segments[0]) { // must synch on some segment. pick 0.
        tab = table;
        max = threshold * CONCURRENCY_LEVEL;
      }
      if (n < max) {
        break;
      }
      resize(0, tab);
    }
    for (Iterator it = t.entrySet().iterator(); it.hasNext();) {
      Map.Entry entry = (Map.Entry)it.next();
      put(entry.getKey(), entry.getValue());
    }
  }

  /**
   * Removes all mappings from this map.
   */
  public void clear() {
    // We don't need all locks at once so long as locks
    // are obtained in low to high order
    for (int s = 0; s < segments.length; ++s) {
      Segment seg = segments[s];
      synchronized (seg) {
        Entry[] tab = table;
        for (int i = s; i < tab.length; i += segments.length) {
          for (Entry e = tab[i]; e != null; e = e.next) {
            e.value = null;
          }
          tab[i] = null;
          seg.count = 0;
        }
      }
    }
  }

  /**
   * Returns a shallow copy of this ConcurrentHashMap instance: the keys
   * and values themselves are not cloned.
   *
   * @return a shallow copy of this map.
   */
  public Object clone() {
    // We cannot call super.clone, since it would share the final segments
    // array, and there's no way to reassign finals.
    return new ConcurrentHashMap(this);
  }

  // Views

  protected transient Set keySet = null;
  protected transient Set entrySet = null;
  protected transient Collection values = null;

  /**
   * Returns a set view of the keys contained in this map. The set is
   * backed by the map, so changes to the map are reflected in the set, and
   * vice-versa. The set supports element removal, which removes the
   * corresponding mapping from this map, via the Iterator.remove,
   * Set.remove, removeAll, retainAll, and clear operations. It does not
   * support the add or addAll operations.
   *
   * @return a set view of the keys contained in this map.
   */
  public Set keySet() {
    Set ks = keySet;
    return (ks != null) ?
        ks : (keySet = new KeySet());
  }

  private class KeySet extends AbstractSet {
    /**
     * @see java.util.Set#iterator()
     */
    public Iterator iterator() {
      return new KeyIterator();
    }

    /**
     * @see java.util.Set#size()
     */
    public int size() {
      return ConcurrentHashMap.this.size();
    }

    /**
     * @see java.util.Set#contains(java.lang.Object)
     */
    public boolean contains(Object o) {
      return ConcurrentHashMap.this.containsKey(o);
    }

    /**
     * @see java.util.Set#remove(java.lang.Object)
     */
    public boolean remove(Object o) {
      return ConcurrentHashMap.this.remove(o) != null;
    }

    /**
     * @see java.util.Set#clear()
     */
    public void clear() {
      ConcurrentHashMap.this.clear();
    }
  }

  /**
   * Returns a collection view of the values contained in this map. The
   * collection is backed by the map, so changes to the map are reflected
   * in the collection, and vice-versa. The collection supports element
   * removal, which removes the corresponding mapping from this map, via
   * the Iterator.remove, Collection.remove, removeAll, retainAll, and
   * clear operations. It does not support the add or addAll operations.
   *
   * @return a collection view of the values contained in this map.
   */
  public Collection values() {
    Collection vs = values;
    return (vs != null) ? vs : (values = new Values());
  }

  private class Values extends AbstractCollection {
    /**
     * @see java.util.AbstractCollection#iterator()
     */
    public Iterator iterator() {
      return new ValueIterator();
    }

    /**
     * @see java.util.AbstractCollection#size()
     */
    public int size() {
      return ConcurrentHashMap.this.size();
    }

    /**
     * @see java.util.AbstractCollection#contains(java.lang.Object)
     */
    public boolean contains(Object o) {
      return ConcurrentHashMap.this.containsValue(o);
    }

    /**
     * @see java.util.AbstractCollection#clear()
     */
    public void clear() {
      ConcurrentHashMap.this.clear();
    }
  }

  /**
   * Returns a collection view of the mappings contained in this map. Each
   * element in the returned collection is a Map.Entry.
   * The collection is backed by the map, so changes to the map are
   * reflected in the collection, and vice-versa. The collection supports
   * element removal, which removes the corresponding mapping from the map,
   * via the Iterator.remove, Collection.remove, removeAll, retainAll, and
   * clear operations. It does not support the add or addAll operations.
   *
   * @return a collection view of the mappings contained in this map.
   */
  public Set entrySet() {
    Set es = entrySet;
    return (es != null) ? es : (entrySet = new EntrySet());
  }

  private class EntrySet extends AbstractSet {
    /**
     * @see java.util.Set#iterator()
     */
    public Iterator iterator() {
      return new HashIterator();
    }

    /**
     * @see java.util.Set#contains(java.lang.Object)
     */
    public boolean contains(Object o) {
      if (!(o instanceof Map.Entry)) {
        return false;
      }
      Map.Entry entry = (Map.Entry)o;
      Object v = ConcurrentHashMap.this.get(entry.getKey());
      return v != null && v.equals(entry.getValue());
    }

    /**
     * @see java.util.Set#remove(java.lang.Object)
     */
    public boolean remove(Object o) {
      if (!(o instanceof Map.Entry)) {
        return false;
      }
      Map.Entry e = (Map.Entry)o;
      return ConcurrentHashMap.this.remove(e.getKey(), e.getValue()) != null;
    }

    /**
     * @see java.util.Set#size()
     */
    public int size() {
      return ConcurrentHashMap.this.size();
    }

    /**
     * @see java.util.Set#clear()
     */
    public void clear() {
      ConcurrentHashMap.this.clear();
    }
  }

  /**
   * Returns an enumeration of the keys in this table.
   *
   * @return an enumeration of the keys in this table.
   * @see Enumeration
   * @see #elements()
   * @see #keySet()
   * @see Map
   */
  public Enumeration keys() {
    return new KeyIterator();
  }

  /**
   * Returns an enumeration of the values in this table. Use the
   * Enumeration methods on the returned object to fetch the elements
   * sequentially.
   *
   * @return an enumeration of the values in this table.
   * @see java.util.Enumeration
   * @see #keys()
   * @see #values()
   * @see Map
   */
  public Enumeration elements() {
    return new ValueIterator();
  }

  /**
   * ConcurrentHashMap collision list entry.
   */
  protected static class Entry implements Map.Entry {
    /*
     * The use of volatile for the value field ensures that we can detect
     * status changes without synchronization. The other fields are never
     * changed, and are marked as final.
     */
    protected final Object key;
    protected volatile Object value;
    protected final int hash;
    protected final Entry next;

    Entry(int hash, Object key, Object value, Entry next) {
      this.value = value;
      this.hash = hash;
      this.key = key;
      this.next = next;
    }

    // Map.Entry Ops

    /**
     * @see java.util.Map.Entry#getKey()
     */
    public Object getKey() {
      return key;
    }

    /**
     * Get the value. Note: In an entrySet or entrySet.iterator(), unless
     * you can guarantee lack of concurrent modification, getValue might
     * return null, reflecting the fact that the entry has been
     * concurrently removed. However, there are no assurances that
     * concurrent removals will be reflected using this method.
     *
     * @return the current value, or null if the entry has been detectably
     *         removed.
     */
    public Object getValue() {
      return value;
    }

    /**
     * Set the value of this entry. Note: In an entrySet or
     * entrySet.iterator(), unless you can guarantee lack of concurrent
     * modification, setValue is not strictly guaranteed to actually
     * replace the value field obtained via the get operation of the
     * underlying hash table in multithreaded applications. If
     * iterator-wide synchronization is not used, and any other concurrent
     * put or remove operations occur, sometimes even to other entries,
     * then this change is not guaranteed to be reflected in the hash
     * table. (It might, or it might not. There are no assurances either
     * way.)
     *
     * @param value
     *          the new value.
     * @return the previous value, or null if entry has been detectably
     *         removed.
     * @exception NullPointerException
     *              if the value is null.
     */
    public Object setValue(Object value) {
      if (value == null) {
        // Throw NullPointerException, as the documented contract promises
        throw new NullPointerException("Value must not be null");
      }
      Object oldValue = this.value;
      this.value = value;
      return oldValue;
    }

    /**
     * @see java.util.Map.Entry#equals(java.lang.Object)
     */
    public boolean equals(Object o) {
      if (!(o instanceof Map.Entry)) {
        return false;
      }
      Map.Entry e = (Map.Entry)o;
      return (key.equals(e.getKey()) && value.equals(e.getValue()));
    }

    /**
     * @see java.util.Map.Entry#hashCode()
     */
    public int hashCode() {
      return key.hashCode() ^ value.hashCode();
    }

    /**
     * @see java.lang.Object#toString()
     */
    public String toString() {
      return key + "=" + value;
    }
  }

  protected class HashIterator implements Iterator, Enumeration {
    protected final Entry[] tab; // snapshot of table
    protected int index; // current slot
    protected Entry entry = null; // current node of slot
    protected Object currentKey; // key for current node
    protected Object currentValue; // value for current node
    protected Entry lastReturned = null; // last node returned by next

    protected HashIterator() {
      // force all segments to synch
      synchronized (segments[0]) {
        tab = table;
      }
      for (int i = 1; i < segments.length; ++i) {
        segments[i].synch();
      }
      index = tab.length - 1;
    }

    /**
     * @see java.util.Enumeration#hasMoreElements()
     */
    public boolean hasMoreElements() {
      return hasNext();
    }

    /**
     * @see java.util.Enumeration#nextElement()
     */
    public Object nextElement() {
      return next();
    }

    /**
     * @see java.util.Iterator#hasNext()
     */
    public boolean hasNext() {
      /*
       * currentKey and currentValue are set here to ensure that next()
       * returns normally if hasNext() returns true. This avoids surprises
       * especially when the final element is removed during traversal --
       * instead, we just ignore the removal during the current traversal.
*/ for (;;) { if (entry != null) { Object v = entry.value; if (v != null) { currentKey = entry.key; currentValue = v; return true; } else { entry = entry.next; } } while (entry == null && index >= 0) { entry = tab[index--]; } if (entry == null) { currentKey = currentValue = null; return false; } } } protected Object returnValueOfNext() { return entry; } /** * @see java.util.Iterator#next() */ public Object next() { if (currentKey == null && !hasNext()) { throw new NoSuchElementException(); } Object result = returnValueOfNext(); lastReturned = entry; currentKey = currentValue = null; entry = entry.next; return result; } /** * @see java.util.Iterator#remove() */ public void remove() { if (lastReturned == null) { throw new IllegalStateException(); } ConcurrentHashMap.this.remove(lastReturned.key); lastReturned = null; } } protected class KeyIterator extends HashIterator { protected Object returnValueOfNext() { return currentKey; } } protected class ValueIterator extends HashIterator { protected Object returnValueOfNext() { return currentValue; } } /** * Save the state of the ConcurrentHashMap instance to a stream * (i.e., serialize it). * @param s * @throws IOException * * @serialData An estimate of the table size, followed by the key (Object) * and value (Object) for each key-value mapping, followed by a * null pair. The key-value mappings are emitted in no * particular order. */ private void writeObject(java.io.ObjectOutputStream s) throws IOException { // Write out the loadfactor, and any hidden stuff s.defaultWriteObject(); // Write out capacity estimate. It is OK if this // changes during the write, since it is only used by // readObject to set initial capacity, to avoid needless resizings. 
int cap; synchronized (segments[0]) { cap = table.length; } s.writeInt(cap); // Write out keys and values (alternating) for (int k = 0; k < segments.length; ++k) { Segment seg = segments[k]; Entry[] tab; synchronized (seg) { tab = table; } for (int i = k; i < tab.length; i += segments.length) { for (Entry e = tab[i]; e != null; e = e.next) { s.writeObject(e.key); s.writeObject(e.value); } } } s.writeObject(null); s.writeObject(null); } /** * Reconstitute the ConcurrentHashMap instance from a stream * (i.e., deserialize it). * @param s * @throws IOException * @throws ClassNotFoundException */ private void readObject(java.io.ObjectInputStream s) throws IOException, ClassNotFoundException { // Read in the threshold, loadfactor, and any hidden stuff s.defaultReadObject(); int cap = s.readInt(); table = newTable(cap); for (int i = 0; i < segments.length; ++i) { segments[i] = new Segment(); } // Read the keys and values, and put the mappings in the table for (;;) { Object key = s.readObject(); Object value = s.readObject(); if (key == null) { break; } put(key, value); } } } </source>
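The class above guarantees that retrievals overlapping updates always see a consistent, recently completed value. A minimal sketch of that contract, using the JDK's java.util.concurrent.ConcurrentHashMap (which offers the same overlapping-reads behavior) rather than the source shown above; the class name `OverlappingReadsDemo` is hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OverlappingReadsDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        map.put("hits", 0);

        // Writer thread updates the mapping 1000 times.
        Thread writer = new Thread(() -> {
            for (int i = 1; i <= 1000; i++) {
                map.put("hits", i);
            }
        });
        writer.start();

        // Reader runs without locking while the writer is active: each
        // get() returns some recently completed value, never a torn or
        // nonsense result.
        while (writer.isAlive()) {
            Integer v = map.get("hits");
            if (v == null || v < 0 || v > 1000) {
                throw new AssertionError("inconsistent read: " + v);
            }
        }
        writer.join();
        System.out.println(map.get("hits")); // prints 1000
    }
}
```

After the writer finishes, the reader is guaranteed to observe the final value; mid-run, it may observe any of the intermediate ones.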

A version of Hashtable that supports mostly-concurrent reading, but exclusive writing
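The class documentation below recommends relying on put returning null for absent keys instead of a check-then-act containsKey/put pair. A minimal sketch of that idiom (the class name `PutReturnsNullDemo` is hypothetical, and a plain java.util.HashMap stands in for the map, since every java.util.Map shares the same put() return contract):

```java
import java.util.HashMap;
import java.util.Map;

public class PutReturnsNullDemo {
    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();

        // First put: the key was absent, so put() returns null.
        String prev = cache.put("config", "v1");
        System.out.println(prev == null ? "freshly inserted" : "replaced " + prev);

        // Second put: put() hands back the value it displaced, so no
        // separate containsKey() check -- and, for the concurrent maps
        // in this article, no check-then-act race -- is needed.
        prev = cache.put("config", "v2");
        System.out.println("replaced " + prev); // prints "replaced v1"
    }
}
```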

   <source lang="java">
 

/*

File: ConcurrentReaderHashMap
Written by Doug Lea. Adapted and released, under explicit
permission, from JDK1.2 HashMap.java and Hashtable.java which
carries the following copyright:
* Copyright 1997 by Sun Microsystems, Inc.,
* 901 San Antonio Road, Palo Alto, California, 94303, U.S.A.
* All rights reserved.
*
* This software is the confidential and proprietary information
* of Sun Microsystems, Inc. ("Confidential Information").  You
* shall not disclose such Confidential Information and shall use
* it only in accordance with the terms of the license agreement
* you entered into with Sun.
History:
Date       Who                What
28oct1999  dl               Created
14dec1999  dl               jmm snapshot
19apr2000  dl               use barrierLock
12jan2001  dl               public release
17nov2001  dl               Minor tunings
20may2002  dl               BarrierLock can now be serialized.
09dec2002  dl               Fix interference checks.
*/

import java.io.IOException;
import java.io.Serializable;
import java.util.AbstractCollection;
import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.Collection;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Set;

/**

* A version of Hashtable that supports mostly-concurrent reading, but exclusive
* writing. Because reads are not limited to periods without writes, a
* concurrent reader policy is weaker than a classic reader/writer policy, but
* is generally faster and allows more concurrency. This class is a good choice
* especially for tables that are mainly created by one thread during the
* start-up phase of a program, and from then on, are mainly read (with perhaps
* occasional additions or removals) in many threads. If you also need
* concurrency among writes, consider instead using ConcurrentHashMap.
* <p>
* 
* Successful retrievals using get(key) and containsKey(key) usually run without
* locking. Unsuccessful ones (i.e., when the key is not present) do involve
* brief synchronization (locking). Also, the size and isEmpty methods are
* always synchronized.
* 
* <p>
* Because retrieval operations can ordinarily overlap with writing operations
* (i.e., put, remove, and their derivatives), retrievals can only be guaranteed
* to return the results of the most recently completed operations
* holding upon their onset. Retrieval operations may or may not return results
* reflecting in-progress writing operations. However, the retrieval operations
* do always return consistent results -- either those holding before any single
* modification or after it, but never a nonsense result. For aggregate
* operations such as putAll and clear, concurrent reads may reflect insertion
* or removal of only some entries. In those rare contexts in which you use a
* hash table to synchronize operations across threads (for example, to prevent
* reads until after clears), you should either encase operations in
* synchronized blocks, or instead use java.util.Hashtable.
* 
* <p>
* 
* This class also supports optional guaranteed exclusive reads, simply by
 * surrounding a call within a synchronized block, as in
 *
 *             ConcurrentReaderHashMap t; ...  Object v;
 *             synchronized(t) { v = t.get(k); }
 *
 * But this is not usually necessary in practice. For example, it is generally
 * inefficient to write:
 *
 *             ConcurrentReaderHashMap t; ...            // Inefficient version
 *             Object key; ...
 *             Object value; ...
 *             synchronized(t) {
 *               if (!t.containsKey(key)) {
 *                 t.put(key, value);
 *                 // other code if not previously present
 *               }
 *               else {
 *                 // other code if it was previously present
 *               }
 *             }
 * 
* 
* Instead, if the values are intended to be the same in each case, just take
* advantage of the fact that put returns null if the key was not previously
* present:
* 
*
 *             ConcurrentReaderHashMap t; ...                // Use this instead
 *             Object key; ...
 *             Object value; ...
 *             Object oldValue = t.put(key, value);
 *             if (oldValue == null) {
 *               // other code if not previously present
 *             }
 *             else {
 *               // other code if it was previously present
 *             }
 * 
* 
* <p>
* 
* Iterators and Enumerations (i.e., those returned by keySet().iterator(),
* entrySet().iterator(), values().iterator(), keys(), and elements()) return
* elements reflecting the state of the hash table at some point at or since the
* creation of the iterator/enumeration. They will return at most one instance
* of each element (via next()/nextElement()), but might or might not reflect
* puts and removes that have been processed since they were created. They do
* not throw ConcurrentModificationException. However, these
* iterators are designed to be used by only one thread at a time. Sharing an
* iterator across multiple threads may lead to unpredictable results if the
* table is being concurrently modified. Again, you can ensure interference-free
* iteration by enclosing the iteration in a synchronized block.
* <p>
* 
* This class may be used as a direct replacement for any use of
* java.util.Hashtable that does not depend on readers being blocked during
* updates. Like Hashtable but unlike java.util.HashMap, this class does NOT
* allow null to be used as a key or value. This class is also
* typically faster than ConcurrentHashMap when there is usually only one thread
* updating the table, but possibly many retrieving values from it.
* <p>
* 
* Implementation note: A slightly faster implementation of this class will be
* possible once planned Java Memory Model revisions are in place.
* 
 *
* 
*/

public class ConcurrentReaderHashMap extends AbstractMap implements Map, Cloneable, Serializable {

 private static final long serialVersionUID = 1L;
 /*
  * The basic strategy is an optimistic-style scheme based on the guarantee
  * that the hash table and its lists are always kept in a consistent enough
  * state to be read without locking:
  * 
  * Read operations first proceed without locking, by traversing the
  * apparently correct list of the apparently correct bin. If an entry is
  * found, but not invalidated (value field null), it is returned. If not
  * found, operations must recheck (after a memory barrier) to make sure they
  * are using both the right list and the right table (which can change under
  * resizes). If invalidated, reads must acquire main update lock to wait out
  * the update, and then re-traverse.
  * 
  * All list additions are at the front of each bin, making it easy to check
  * changes, and also fast to traverse. Entry next pointers are never
  * assigned. Remove() builds new nodes when necessary to preserve this.
  * 
  * Remove() (also clear()) invalidates removed nodes to alert read
  * operations that they must wait out the full modifications.
  * 
  */
  /** A Serializable class for the barrier lock. */
 protected static class BarrierLock implements java.io.Serializable
 {
   private static final long serialVersionUID = 1L;
 }
 /**
  * Lock used only for its memory effects.
  */
 protected final BarrierLock barrierLock = new BarrierLock();
 /**
  * field written to only to guarantee lock ordering.
  */
 protected transient Object lastWrite;
 /**
  * Force a memory synchronization that will cause all readers to see table.
  * Call only when already holding main synch lock.
  * @param x 
  */
 protected final void recordModification(Object x)
 {
   synchronized (barrierLock)
   {
     lastWrite = x;
   }
 }
 /**
  * Get ref to table; the reference and the cells it accesses will be at
  * least as fresh as from last use of barrierLock
  * @return table cells
  */
 protected final Entry[] getTableForReading()
 {
   synchronized (barrierLock)
   {
     return table;
   }
 }
 /**
  * The default initial number of table slots for this table (32). Used when
  * not otherwise specified in constructor.
  */
 public static final int DEFAULT_INITIAL_CAPACITY = 32;
 /**
  * The minimum capacity, used if a lower value is implicitly specified by
  * either of the constructors with arguments. MUST be a power of two.
  */
 private static final int MINIMUM_CAPACITY = 4;
 /**
  * The maximum capacity, used if a higher value is implicitly specified by
  * either of the constructors with arguments. MUST be a power of two <= 1<<30.
  */
 private static final int MAXIMUM_CAPACITY = 1 << 30;
 /**
  * The default load factor for this table (1.0). Used when not otherwise
  * specified in constructor.
  */
 public static final float DEFAULT_LOAD_FACTOR = 0.75f;
 /**
  * The hash table data.
  */
 protected transient Entry[] table;
 /**
  * The total number of mappings in the hash table.
  */
 protected transient int count;
 /**
  * The table is rehashed when its size exceeds this threshold. (The value of
  * this field is always (int)(capacity * loadFactor).)
  * 
  * @serial
  */
 protected int threshold;
 /**
  * The load factor for the hash table.
  * 
  * @serial
  */
 protected float loadFactor;
 /**
  * Returns the appropriate capacity (power of two) for the specified initial
  * capacity argument.
  * @param initialCapacity
  * @return appropriate capacity
  */
 private int p2capacity(int initialCapacity)
 {
   int cap = initialCapacity;
   // Compute the appropriate capacity
   int result;
   if (cap > MAXIMUM_CAPACITY || cap < 0)
   {
     result = MAXIMUM_CAPACITY;
   }
   else
   {
     result = MINIMUM_CAPACITY;
     while (result < cap)
     {
       result <<= 1;
     }
   }
   return result;
 }
 /**
  * Return hash code for Object x. Since we are using power-of-two tables, it
  * is worth the effort to improve hashcode via the same multiplicative
  * scheme as used in IdentityHashMap.
  * @param x 
  * @return hash code
  */
 private static int hash(Object x)
 {
   int h = x.hashCode();
   // Multiply by 127 (quickly, via shifts), and mix in some high
   // bits to help guard against bunching of codes that are
   // consecutive or equally spaced.
   return ((h << 7) - h + (h >>> 9) + (h >>> 17));
 }
 /**
  * Check for equality of non-null references x and y.
  * @param x 
  * @param y 
  * @return equality
  */
 protected boolean eq(Object x, Object y)
 {
   return x == y || x.equals(y);
 }
 /**
  * Constructs a new, empty map with the specified initial capacity and the
  * specified load factor.
  * 
  * @param initialCapacity
   *            the initial capacity. The actual initial capacity is rounded
   *            up to a power of two.
  * @param loadFactor
  *            the load factor of the ConcurrentReaderHashMap
   * @throws IllegalArgumentException
   *             if the load factor is nonpositive.
  */
 public ConcurrentReaderHashMap(int initialCapacity, float loadFactor)
 {
   if (loadFactor <= 0)
   {
     throw new IllegalArgumentException("Illegal Load factor: " + loadFactor);
   }
   this.loadFactor = loadFactor;
   int cap = p2capacity(initialCapacity);
   table = new Entry[cap];
   threshold = (int)(cap * loadFactor);
 }
 /**
  * Constructs a new, empty map with the specified initial capacity and
  * default load factor.
  * 
  * @param initialCapacity
  *            the initial capacity of the ConcurrentReaderHashMap.
  * @throws IllegalArgumentException
  *             if the initial maximum number of elements is less than zero.
  */
 public ConcurrentReaderHashMap(int initialCapacity)
 {
   this(initialCapacity, DEFAULT_LOAD_FACTOR);
 }
 /**
  * Constructs a new, empty map with a default initial capacity and load
  * factor.
  */
 public ConcurrentReaderHashMap()
 {
   this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
 }
 /**
  * Constructs a new map with the same mappings as the given map. The map is
  * created with a capacity of twice the number of mappings in the given map
  * or 16 (whichever is greater), and a default load factor.
  * @param t 
  */
 public ConcurrentReaderHashMap(Map t)
 {
   this(Math.max((int)(t.size() / DEFAULT_LOAD_FACTOR) + 1, 16), DEFAULT_LOAD_FACTOR);
   putAll(t);
 }
 /**
  * Returns the number of key-value mappings in this map.
  * 
  * @return the number of key-value mappings in this map.
  */
 public synchronized int size()
 {
   return count;
 }
 /**
  * Returns true if this map contains no key-value mappings.
  * 
  * @return true if this map contains no key-value mappings.
  */
 public synchronized boolean isEmpty()
 {
   return count == 0;
 }
 /**
  * Returns the value to which the specified key is mapped in this table.
  * 
  * @param key
  *            a key in the table.
  * @return the value to which the key is mapped in this table;
  *         null if the key is not mapped to any value in this
  *         table.
  * @exception NullPointerException
  *                if the key is null.
  * @see #put(Object, Object)
  */
 public Object get(Object key)
 {
   // throw null pointer exception if key null
   int hash = hash(key);
   /*
    * Start off at the apparently correct bin. If entry is found, we need
    * to check after a barrier anyway. If not found, we need a barrier to
    * check if we are actually in right bin. So either way, we encounter
    * only one barrier unless we need to retry. And we only need to fully
    * synchronize if there have been concurrent modifications.
    */
   Entry[] tab = table;
   int index = hash & (tab.length - 1);
   Entry first = tab[index];
   Entry e = first;
   for (;;)
   {
     if (e == null)
     {
       // If key apparently not there, check to
       // make sure this was a valid read
       Entry[] reread = getTableForReading();
       if (tab == reread && first == tab[index])
       {
         return null;
       }
       else
       {
         // Wrong list -- must restart traversal at new first
         tab = reread;
         e = first = tab[index = hash & (tab.length - 1)];
       }
     }
     else if (e.hash == hash && eq(key, e.key))
     {
       Object value = e.value;
       if (value != null)
       {
         return value;
       }
       // Entry was invalidated during deletion. But it could
       // have been re-inserted, so we must retraverse.
       // To avoid useless contention, get lock to wait out
       // modifications
       // before retraversing.
       synchronized (this)
       {
         tab = table;
       }
       e = first = tab[index = hash & (tab.length - 1)];
     }
     else
     {
       e = e.next;
     }
   }
 }
 /**
  * Tests if the specified object is a key in this table.
  * 
  * @param key
  *            possible key.
  * @return true if and only if the specified object is a key
  *         in this table, as determined by the equals method;
  *         false otherwise.
  * @exception NullPointerException
  *                if the key is null.
  * @see #contains(Object)
  */
 public boolean containsKey(Object key)
 {
   return get(key) != null;
 }
 /**
  * Maps the specified key to the specified value
  * in this table. Neither the key nor the value can be null.
  * <p>
  * 
  * The value can be retrieved by calling the get method with
  * a key that is equal to the original key.
  * 
  * @param key
  *            the table key.
  * @param value
  *            the value.
  * @return the previous value of the specified key in this table, or
  *         null if it did not have one.
   * @exception NullPointerException
   *                if the key is null.
   * @exception IllegalArgumentException
   *                if the value is null.
  * @see Object#equals(Object)
  * @see #get(Object)
  */
 public Object put(Object key, Object value)
 {
   if (value == null)
   {
     throw new IllegalArgumentException("Value must not be null");
   }
   int hash = hash(key);
   Entry[] tab = table;
   int index = hash & (tab.length - 1);
   Entry first = tab[index];
   Entry e;
   for (e = first; e != null; e = e.next)
   {
     if (e.hash == hash && eq(key, e.key))
     {
       break;
     }
   }
   synchronized (this)
   {
     if (tab == table)
     {
       if (e == null)
       {
         // make sure we are adding to correct list
         if (first == tab[index])
         {
           // Add to front of list
           Entry newEntry = new Entry(hash, key, value, first);
           tab[index] = newEntry;
           if (++count >= threshold)
           {
             rehash();
           }
           else
           {
             recordModification(newEntry);
           }
           return null;
         }
       }
       else
       {
         Object oldValue = e.value;
         if (first == tab[index] && oldValue != null)
         {
           e.value = value;
           return oldValue;
         }
       }
     }
     // retry if wrong list or lost race against concurrent remove
     return sput(key, value, hash);
   }
 }
 /**
  * Continuation of put(), called only when synch lock is held and
  * interference has been detected.
  * @param key 
  * @param value 
  * @param hash 
  * @return continuation object
  */
 protected Object sput(Object key, Object value, int hash)
 {
   Entry[] tab = table;
   int index = hash & (tab.length - 1);
   Entry first = tab[index];
   Entry e = first;
   for (;;)
   {
     if (e == null)
     {
       Entry newEntry = new Entry(hash, key, value, first);
       tab[index] = newEntry;
       if (++count >= threshold)
       {
         rehash();
       }
       else
       {
         recordModification(newEntry);
       }
       return null;
     }
     else if (e.hash == hash && eq(key, e.key))
     {
       Object oldValue = e.value;
       e.value = value;
       return oldValue;
     }
     else
     {
       e = e.next;
     }
   }
 }
 /**
  * Rehashes the contents of this map into a new table with a larger
  * capacity. This method is called automatically when the number of keys in
  * this map exceeds its capacity and load factor.
  */
 protected void rehash()
 {
   Entry[] oldTable = table;
   int oldCapacity = oldTable.length;
   if (oldCapacity >= MAXIMUM_CAPACITY)
   {
     threshold = Integer.MAX_VALUE; // avoid retriggering
     return;
   }
   int newCapacity = oldCapacity << 1;
   int mask = newCapacity - 1;
   threshold = (int)(newCapacity * loadFactor);
   Entry[] newTable = new Entry[newCapacity];
   /*
    * Reclassify nodes in each list to new Map. Because we are using
    * power-of-two expansion, the elements from each bin must either stay
    * at same index, or move to oldCapacity+index. We also eliminate
    * unnecessary node creation by catching cases where old nodes can be
     * reused because their next fields won't change. Statistically, at the
     * default threshold, only about one-sixth of them need cloning. (The
    * nodes they replace will be garbage collectable as soon as they are no
    * longer referenced by any reader thread that may be in the midst of
    * traversing table right now.)
    */
   for (int i = 0; i < oldCapacity; i++)
   {
     // We need to guarantee that any existing reads of old Map can
     // proceed. So we cannot yet null out each bin.
     Entry e = oldTable[i];
     if (e != null)
     {
       int idx = e.hash & mask;
       Entry next = e.next;
       // Single node on list
       if (next == null)
       {
         newTable[idx] = e;
       }
       else
       {
         // Reuse trailing consecutive sequence of all same bit
         Entry lastRun = e;
         int lastIdx = idx;
         for (Entry last = next; last != null; last = last.next)
         {
           int k = last.hash & mask;
           if (k != lastIdx)
           {
             lastIdx = k;
             lastRun = last;
           }
         }
         newTable[lastIdx] = lastRun;
         // Clone all remaining nodes
         for (Entry p = e; p != lastRun; p = p.next)
         {
           int k = p.hash & mask;
           newTable[k] = new Entry(p.hash, p.key, p.value, newTable[k]);
         }
       }
     }
   }
   table = newTable;
   recordModification(newTable);
 }
 /**
  * Removes the key (and its corresponding value) from this table. This
  * method does nothing if the key is not in the table.
  * 
  * @param key
  *            the key that needs to be removed.
  * @return the value to which the key had been mapped in this table, or
  *         null if the key did not have a mapping.
  * @exception NullPointerException
  *                if the key is null.
  */
 public Object remove(Object key)
 {
   /*
    * Find the entry, then 1. Set value field to null, to force get() to
    * retry 2. Rebuild the list without this entry. All entries following
     * removed node can stay in list, but all preceding ones need to be
    * cloned. Traversals rely on this strategy to ensure that elements will
    * not be repeated during iteration.
    */
   int hash = hash(key);
   Entry[] tab = table;
   int index = hash & (tab.length - 1);
   Entry first = tab[index];
    Entry e;
    for (e = first; e != null; e = e.next)
   {
     if (e.hash == hash && eq(key, e.key))
     {
       break;
     }
   }
   synchronized (this)
   {
     if (tab == table)
     {
       if (e == null)
       {
         if (first == tab[index])
         {
           return null;
         }
       }
       else
       {
         Object oldValue = e.value;
         if (first == tab[index] && oldValue != null)
         {
           e.value = null;
           count--;
           Entry head = e.next;
           for (Entry p = first; p != e; p = p.next)
           {
             head = new Entry(p.hash, p.key, p.value, head);
           }
           tab[index] = head;
           recordModification(head);
           return oldValue;
         }
       }
     }
     // Wrong list or interference
     return sremove(key, hash);
   }
 }
 /**
  * Continuation of remove(), called only when synch lock is held and
  * interference has been detected.
  * @param key 
  * @param hash 
  * @return continuation object
  */
 protected Object sremove(Object key, int hash)
 {
   Entry[] tab = table;
   int index = hash & (tab.length - 1);
   Entry first = tab[index];
   for (Entry e = first; e != null; e = e.next)
   {
     if (e.hash == hash && eq(key, e.key))
     {
       Object oldValue = e.value;
       e.value = null;
       count--;
       Entry head = e.next;
       for (Entry p = first; p != e; p = p.next)
       {
         head = new Entry(p.hash, p.key, p.value, head);
       }
       tab[index] = head;
       recordModification(head);
       return oldValue;
     }
   }
   return null;
 }
 /**
  * Returns true if this map maps one or more keys to the
  * specified value. Note: This method requires a full internal traversal of
  * the hash table, and so is much slower than method containsKey.
  * 
  * @param value
  *            value whose presence in this map is to be tested.
  * @return true if this map maps one or more keys to the
  *         specified value.
   * @exception IllegalArgumentException
   *                if the value is null.
  */
 public boolean containsValue(Object value)
 {
   if (value == null)
   {
     throw new IllegalArgumentException("Value must not be null");
   }
   
   Entry tab[] = getTableForReading();
   for (int i = 0; i < tab.length; ++i)
   {
     for (Entry e = tab[i]; e != null; e = e.next)
     {
       if (value.equals(e.value))
       {
         return true;
       }
     }
   }
   return false;
 }
 /**
  * Tests if some key maps into the specified value in this table. This
  * operation is more expensive than the containsKey method.
  * <p>
  * 
  * Note that this method is identical in functionality to containsValue,
  * (which is part of the Map interface in the collections framework).
  * 
  * @param value
  *            a value to search for.
  * @return true if and only if some key maps to the
  *         value argument in this table as determined by the
  *         equals method; false otherwise.
   * @exception IllegalArgumentException
   *                if the value is null.
  * @see #containsKey(Object)
  * @see #containsValue(Object)
  * @see Map
  */
 public boolean contains(Object value)
 {
   return containsValue(value);
 }
 /**
  * Copies all of the mappings from the specified map to this one.
  * 
  * These mappings replace any mappings that this map had for any of the keys
  * currently in the specified Map.
  * 
  * @param t
  *            Mappings to be stored in this map.
  */
 public synchronized void putAll(Map t)
 {
   int n = t.size();
   if (n == 0)
   {
     return;
   }
   // Expand enough to hold at least n elements without resizing.
   // We can only resize table by factor of two at a time.
   // It is faster to rehash with fewer elements, so do it now.
   while (n >= threshold)
   {
     rehash();
   }
   for (Iterator it = t.entrySet().iterator(); it.hasNext();)
   {
     Map.Entry entry = (Map.Entry)it.next();
     Object key = entry.getKey();
     Object value = entry.getValue();
     put(key, value);
   }
 }
 /**
  * Removes all mappings from this map.
  */
 public synchronized void clear()
 {
   Entry tab[] = table;
   for (int i = 0; i < tab.length; ++i)
   {
      // must invalidate all to force concurrent gets to wait and then
      // retry
     for (Entry e = tab[i]; e != null; e = e.next)
     {
       e.value = null;
     }
     tab[i] = null;
   }
   count = 0;
   recordModification(tab);
 }
 /**
  * Returns a shallow copy of this ConcurrentReaderHashMap
  * instance: the keys and values themselves are not cloned.
  * 
  * @return a shallow copy of this map.
  */
  public synchronized Object clone()
 {
   try
   {
     ConcurrentReaderHashMap t = (ConcurrentReaderHashMap)super.clone();
     t.keySet = null;
     t.entrySet = null;
     t.values = null;
     Entry[] tab = table;
     t.table = new Entry[tab.length];
     Entry[] ttab = t.table;
     for (int i = 0; i < tab.length; ++i)
     {
       Entry first = null;
       for (Entry e = tab[i]; e != null; e = e.next)
       {
         first = new Entry(e.hash, e.key, e.value, first);
       }
       ttab[i] = first;
     }
     return t;
   }
   catch (CloneNotSupportedException e)
   {
      // this shouldn't happen, since we are Cloneable
     throw new InternalError();
   }
 }
 // Views
 protected transient Set keySet = null;
 protected transient Set entrySet = null;
 protected transient Collection values = null;
 /**
  * Returns a set view of the keys contained in this map. The set is backed
  * by the map, so changes to the map are reflected in the set, and
  * vice-versa. The set supports element removal, which removes the
  * corresponding mapping from this map, via the Iterator.remove,
  * Set.remove, removeAll, retainAll, and
  * clear operations. It does not support the add or
  * addAll operations.
  * 
  * @return a set view of the keys contained in this map.
  */
 public Set keySet()
 {
   Set ks = keySet;
   return (ks != null) ? ks : (keySet = new KeySet());
 }
 private class KeySet extends AbstractSet
 {
   /**
    * @see Collection#iterator()
    */
   public Iterator iterator()
   {
     return new KeyIterator();
   }
   /**
    * @see Collection#size()
    */
   public int size()
   {
     return ConcurrentReaderHashMap.this.size();
   }
   /**
    * @see Collection#contains(java.lang.Object)
    */
   public boolean contains(Object o)
   {
     return ConcurrentReaderHashMap.this.containsKey(o);
   }
   /**
    * @see Collection#remove(java.lang.Object)
    */
   public boolean remove(Object o)
   {
     return ConcurrentReaderHashMap.this.remove(o) != null;
   }
   /**
    * @see Collection#clear()
    */
   public void clear()
   {
     ConcurrentReaderHashMap.this.clear();
   }
 }
 /**
  * Returns a collection view of the values contained in this map. The
  * collection is backed by the map, so changes to the map are reflected in
  * the collection, and vice-versa. The collection supports element removal,
  * which removes the corresponding mapping from this map, via the
  * Iterator.remove, Collection.remove,
  * removeAll, retainAll, and clear
  * operations. It does not support the add or addAll
  * operations.
  * 
  * @return a collection view of the values contained in this map.
  */
 public Collection values()
 {
   Collection vs = values;
   return (vs != null) ? vs : (values = new Values());
 }
 private class Values extends AbstractCollection
 {
   /**
    * @see Collection#iterator()
    */
   public Iterator iterator()
   {
     return new ValueIterator();
   }
   /**
    * @see Collection#size()
    */
   public int size()
   {
     return ConcurrentReaderHashMap.this.size();
   }
   /**
    * @see Collection#contains(java.lang.Object)
    */
   public boolean contains(Object o)
   {
     return ConcurrentReaderHashMap.this.containsValue(o);
   }
   /**
    * @see Collection#clear()
    */
   public void clear()
   {
     ConcurrentReaderHashMap.this.clear();
   }
 }
 /**
  * Returns a collection view of the mappings contained in this map. Each
  * element in the returned collection is a Map.Entry. The
  * collection is backed by the map, so changes to the map are reflected in
  * the collection, and vice-versa. The collection supports element removal,
  * which removes the corresponding mapping from the map, via the
  * Iterator.remove, Collection.remove,
  * removeAll, retainAll, and clear
  * operations. It does not support the add or addAll
  * operations.
  * 
  * @return a collection view of the mappings contained in this map.
  */
 public Set entrySet()
 {
   Set es = entrySet;
   return (es != null) ? es : (entrySet = new EntrySet());
 }
 private class EntrySet extends AbstractSet
 {
   /**
    * @see Collection#iterator()
    */
   public Iterator iterator()
   {
     return new HashIterator();
   }
   /**
    * @see Collection#contains(java.lang.Object)
    */
   public boolean contains(Object o)
   {
     if (!(o instanceof Map.Entry))
     {
       return false;
     }
     Map.Entry entry = (Map.Entry)o;
     Object v = ConcurrentReaderHashMap.this.get(entry.getKey());
     return v != null && v.equals(entry.getValue());
   }
   /**
    * @see Collection#remove(java.lang.Object)
    */
   public boolean remove(Object o)
   {
     if (!(o instanceof Map.Entry))
     {
       return false;
     }
     return ConcurrentReaderHashMap.this.findAndRemoveEntry((Map.Entry)o);
   }
   /**
    * @see Collection#size()
    */
   public int size()
   {
     return ConcurrentReaderHashMap.this.size();
   }
   /**
    * @see Collection#clear()
    */
   public void clear()
   {
     ConcurrentReaderHashMap.this.clear();
   }
 }
 /**
  * Helper method for entrySet.remove
  * 
  * @param entry
  * 
  * @return true when the element was found and removed.
  */
 protected synchronized boolean findAndRemoveEntry(Map.Entry entry)
 {
   Object key = entry.getKey();
   Object v = get(key);
   if (v != null && v.equals(entry.getValue()))
   {
     remove(key);
     return true;
   }
   else
   {
     return false;
   }
 }
 /**
  * Returns an enumeration of the keys in this table.
  * 
  * @return an enumeration of the keys in this table.
  * @see Enumeration
  * @see #elements()
  * @see #keySet()
  * @see Map
  */
 public Enumeration keys()
 {
   return new KeyIterator();
 }
 /**
  * Returns an enumeration of the values in this table. Use the Enumeration
  * methods on the returned object to fetch the elements sequentially.
  * 
  * @return an enumeration of the values in this table.
  * @see java.util.Enumeration
  * @see #keys()
  * @see #values()
  * @see Map
  */
 public Enumeration elements()
 {
   return new ValueIterator();
 }
 /**
  * ConcurrentReaderHashMap collision list entry.
  */
 protected static class Entry implements Map.Entry
 {
   /*
    * The use of volatile for value field ensures that we can detect status
    * changes without synchronization. The other fields are never changed,
    * and are marked as final.
    */
   protected final int hash;
   protected final Object key;
   protected final Entry next;
   protected volatile Object value;
   Entry(int hash, Object key, Object value, Entry next)
   {
     this.hash = hash;
     this.key = key;
     this.next = next;
     this.value = value;
   }
   // Map.Entry Ops
   /**
    * @see Map.Entry#getKey()
    */
   public Object getKey()
   {
     return key;
   }
   /**
    * Get the value. Note: In an entrySet or entrySet.iterator, unless the
    * set or iterator is used under synchronization of the table as a whole
    * (or you can otherwise guarantee lack of concurrent modification),
    * getValue might return null, reflecting the fact
    * that the entry has been concurrently removed. However, there are no
    * assurances that concurrent removals will be reflected using this
    * method.
    * 
    * @return the current value, or null if the entry has been detectably
    *         removed.
    */
   public Object getValue()
   {
     return value;
   }
   /**
    * Set the value of this entry. Note: In an entrySet or
    * entrySet.iterator), unless the set or iterator is used under
    * synchronization of the table as a whole (or you can otherwise
    * guarantee lack of concurrent modification), setValue is
    * not strictly guaranteed to actually replace the value field obtained
    * via the get operation of the underlying hash table in
    * multithreaded applications. If iterator-wide synchronization is not
    * used, and any other concurrent put or remove
    * operations occur, sometimes even to other entries, then
    * this change is not guaranteed to be reflected in the hash table. (It
    * might, or it might not. There are no assurances either way.)
    * 
    * @param value
    *            the new value.
    * @return the previous value, or null if entry has been detectably
    *         removed.
    * @exception NullPointerException
    *                if the value is null.
    * 
    */
   public Object setValue(Object value)
   {
      if (value == null)
      {
        // The javadoc contract above specifies NullPointerException for null values
        throw new NullPointerException("Value must not be null");
      }
     
     Object oldValue = this.value;
     this.value = value;
     return oldValue;
   }
   /**
    * @see Object#equals(java.lang.Object)
    */
   public boolean equals(Object o)
   {
     if (!(o instanceof Map.Entry))
     {
       return false;
     }
     Map.Entry e = (Map.Entry)o;
     return (key.equals(e.getKey()) && value.equals(e.getValue()));
   }
   /**
    * @see Object#hashCode()
    */
   public int hashCode()
   {
     return key.hashCode() ^ value.hashCode();
   }
   /**
    * @see Object#toString()
    */
   public String toString()
   {
     return key + "=" + value;
   }
 }
 protected class HashIterator implements Iterator, Enumeration
 {
   protected final Entry[] tab; // snapshot of table
   protected int index; // current slot
   protected Entry entry = null; // current node of slot
   protected Object currentKey; // key for current node
   protected Object currentValue; // value for current node
   protected Entry lastReturned = null; // last node returned by next
   protected HashIterator()
   {
     tab = ConcurrentReaderHashMap.this.getTableForReading();
     index = tab.length - 1;
   }
   /**
    * @see Enumeration#hasMoreElements()
    */
   public boolean hasMoreElements()
   {
     return hasNext();
   }
   /**
    * @see Enumeration#nextElement()
    */
   public Object nextElement()
   {
     return next();
   }
   /**
    * @see Iterator#hasNext()
    */
   public boolean hasNext()
   {
     /*
       * currentKey and currentValue are set here to ensure that next()
      * returns normally if hasNext() returns true. This avoids surprises
      * especially when final element is removed during traversal --
      * instead, we just ignore the removal during current traversal.
      */
     for (;;)
     {
       if (entry != null)
       {
         Object v = entry.value;
         if (v != null)
         {
           currentKey = entry.key;
           currentValue = v;
           return true;
         }
         else
         {
           entry = entry.next;
         }
       }
       while (entry == null && index >= 0)
       {
         entry = tab[index--];
       }
       if (entry == null)
       {
         currentKey = currentValue = null;
         return false;
       }
     }
   }
   protected Object returnValueOfNext()
   {
     return entry;
   }
   /**
    * @see Iterator#next()
    */
   public Object next()
   {
     if (currentKey == null && !hasNext())
     {
       throw new NoSuchElementException();
     }
     Object result = returnValueOfNext();
     lastReturned = entry;
     currentKey = currentValue = null;
     entry = entry.next;
     return result;
   }
   /**
    * @see Iterator#remove()
    */
   public void remove()
   {
     if (lastReturned == null)
     {
       throw new IllegalStateException();
     }
     ConcurrentReaderHashMap.this.remove(lastReturned.key);
     lastReturned = null;
   }
 }
 protected class KeyIterator extends HashIterator
 {
   protected Object returnValueOfNext()
   {
     return currentKey;
   }
 }
 protected class ValueIterator extends HashIterator
 {
   protected Object returnValueOfNext()
   {
     return currentValue;
   }
 }
 /**
  * Save the state of the ConcurrentReaderHashMap instance to a
  * stream (i.e., serialize it).
  * @param s 
  * @throws IOException 
  * 
  * @serialData The capacity of the ConcurrentReaderHashMap (the
  *             length of the bucket array) is emitted (int), followed by the
  *             size of the ConcurrentReaderHashMap (the number of
  *             key-value mappings), followed by the key (Object) and value
  *             (Object) for each key-value mapping represented by the
   *             ConcurrentReaderHashMap. The key-value mappings are emitted in
  *             no particular order.
  */
 private synchronized void writeObject(java.io.ObjectOutputStream s) throws IOException
 {
   // Write out the threshold, loadfactor, and any hidden stuff
   s.defaultWriteObject();
   // Write out number of buckets
   s.writeInt(table.length);
   // Write out size (number of Mappings)
   s.writeInt(count);
   // Write out keys and values (alternating)
   for (int index = table.length - 1; index >= 0; index--)
   {
     Entry entry = table[index];
     while (entry != null)
     {
       s.writeObject(entry.key);
       s.writeObject(entry.value);
       entry = entry.next;
     }
   }
 }
 /**
  * Reconstitute the ConcurrentReaderHashMap instance from a
  * stream (i.e., deserialize it).
  * @param s 
  * @throws IOException 
  * @throws ClassNotFoundException 
  */
 private synchronized void readObject(java.io.ObjectInputStream s) throws IOException,
     ClassNotFoundException
 {
   // Read in the threshold, loadfactor, and any hidden stuff
   s.defaultReadObject();
   // Read in number of buckets and allocate the bucket array;
   int numBuckets = s.readInt();
   table = new Entry[numBuckets];
   // Read in size (number of Mappings)
   int size = s.readInt();
   // Read the keys and values, and put the mappings in the table
   for (int i = 0; i < size; i++)
   {
     Object key = s.readObject();
     Object value = s.readObject();
     put(key, value);
   }
 }
 /**
  * Return the number of slots in this table
  * @return number of slots in this table
  */
 public synchronized int capacity()
 {
   return table.length;
 }
 /**
  * Return the load factor
  * @return the load factor
  */
 public float loadFactor()
 {
   return loadFactor;
 }

}
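/*
 * Usage sketch (not part of the original listing): exercises the legacy
 * Enumeration views (keys()/elements()) alongside the Map-style methods.
 * Assumes the surrounding ConcurrentReaderHashMap class compiles as-is,
 * including its no-argument constructor.
 */
class ConcurrentReaderHashMapDemo {
  public static void main(String[] args) {
    ConcurrentReaderHashMap map = new ConcurrentReaderHashMap();
    map.put("x", Integer.valueOf(1));
    map.put("y", Integer.valueOf(2));

    // Hashtable-style enumeration of keys; order is unspecified.
    for (Enumeration e = map.keys(); e.hasMoreElements();)
      System.out.println(e.nextElement());

    // elements() enumerates the values the same way.
    for (Enumeration e = map.elements(); e.hasMoreElements();)
      System.out.println(e.nextElement());
  }
}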


 </source>
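The view-collection semantics documented above (keySet, values, and entrySet are backed by the map, so removal through a view removes the underlying mapping) carried over into java.util.concurrent.ConcurrentHashMap, which superseded this class in JDK 5. A small sketch against the standard JDK class, for comparison:

   <source lang="java">
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ViewDemo {
  public static void main(String[] args) {
    Map<String, Integer> map = new ConcurrentHashMap<String, Integer>();
    map.put("a", 1);
    map.put("b", 2);

    // Removing a key through the keySet view removes the mapping itself.
    map.keySet().remove("a");
    System.out.println(map.containsKey("a")); // false

    // The values view is backed by the map as well.
    map.values().clear();
    System.out.println(map.isEmpty()); // true
  }
}
   </source>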
   
  
 
  



Concurrent Doubly LinkedList

   <source lang="java">
 

/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/licenses/publicdomain
 */

import java.util.AbstractCollection;
import java.util.ArrayList;
import java.util.Collection;
import java.util.ConcurrentModificationException;
import java.util.Deque;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.concurrent.atomic.AtomicReference;

/**

* A concurrent linked-list implementation of a {@link Deque} (double-ended
* queue). Concurrent insertion, removal, and access operations execute safely
* across multiple threads. Iterators are weakly consistent, returning
* elements reflecting the state of the deque at some point at or since the
* creation of the iterator. They do not throw {@link
* ConcurrentModificationException}, and may proceed concurrently with other
* operations.
* 
* <p>
* This class and its iterators implement all of the optional methods
* of the {@link Collection} and {@link Iterator} interfaces. Like most other
* concurrent collection implementations, this class does not permit the use of
 * null elements, because some null arguments and return values
* cannot be reliably distinguished from the absence of elements. Arbitrarily,
* the {@link Collection#remove} method is mapped to
* removeFirstOccurrence, and {@link Collection#add} is mapped to
* addLast.
* 
* <p>
* Beware that, unlike in most collections, the size method is
* NOT a constant-time operation. Because of the asynchronous nature
* of these deques, determining the current number of elements requires a
* traversal of the elements.
* 
* <p>
* This class is Serializable, but relies on default serialization
* mechanisms. Usually, it is a better idea for any serializable class using a
 * ConcurrentDoublyLinkedList to instead serialize a snapshot of the
* elements obtained by method toArray.
* 
* @author Doug Lea
* @param <E>
*            the type of elements held in this collection
*/

public class ConcurrentDoublyLinkedList<E> extends AbstractCollection<E>
    implements java.io.Serializable {
 /*
   * This is an adaptation of an algorithm described in Paul Martin's "A
   * Practical Lock-Free Doubly-Linked List" (Sun Labs tech report). The basic
  * idea is to primarily rely on next-pointers to ensure consistency.
  * Prev-pointers are in part optimistic, reconstructed using forward
  * pointers as needed. The main forward list uses a variant of HM-list
  * algorithm similar to the one used in ConcurrentSkipListMap class, but a
  * little simpler. It is also basically similar to the approach in Edya
  * Ladan-Mozes and Nir Shavit "An Optimistic Approach to Lock-Free FIFO
  * Queues" in DISC04.
  * 
   * Quoting a summary in Paul Martin's tech report:
  * 
  * All cleanups work to maintain these invariants: (1) forward pointers are
  * the ground truth. (2) forward pointers to dead nodes can be improved by
  * swinging them further forward around the dead node. (2.1) forward
  * pointers are still correct when pointing to dead nodes, and forward
  * pointers from dead nodes are left as they were when the node was deleted.
  * (2.2) multiple dead nodes may point forward to the same node. (3)
  * backward pointers were correct when they were installed (3.1) backward
  * pointers are correct when pointing to any node which points forward to
  * them, but since more than one forward pointer may point to them, the live
  * one is best. (4) backward pointers that are out of date due to deletion
  * point to a deleted node, and need to point further back until they point
  * to the live node that points to their source. (5) backward pointers that
  * are out of date due to insertion point too far backwards, so shortening
  * their scope (by searching forward) fixes them. (6) backward pointers from
  * a dead node cannot be "improved" since there may be no live node pointing
  * forward to their origin. (However, it does no harm to try to improve them
  * while racing with a deletion.)
  * 
  * 
   * Notation guide for local variables:
   *   n, b, f : a node, its predecessor, and its successor
   *   s       : some other successor
  */
 
 // Minor convenience utilities
 /**
   * Returns true if given reference is non-null and isn't a header, trailer,
  * or marker.
  * 
  * @param n
  *            (possibly null) node
  * @return true if n exists as a user node
  */
 private static boolean usable(Node<?> n) {
   return n != null && !n.isSpecial();
 }
 /**
  * Throws NullPointerException if argument is null
  * 
  * @param v
  *            the element
  */
 private static void checkNullArg(Object v) {
   if (v == null)
     throw new NullPointerException();
 }
 /**
  * Returns element unless it is null, in which case throws
  * NoSuchElementException.
  * 
  * @param v
  *            the element
  * @return the element
  */
 private E screenNullResult(E v) {
   if (v == null)
     throw new NoSuchElementException();
   return v;
 }
 /**
  * Creates an array list and fills it with elements of this list. Used by
  * toArray.
  * 
  * @return the arrayList
  */
 private ArrayList<E> toArrayList() {
   ArrayList<E> c = new ArrayList<E>();
   for (Node<E> n = header.forward(); n != null; n = n.forward())
     c.add(n.element);
   return c;
 }
 // Fields and constructors
 private static final long serialVersionUID = 876323262645176354L;
 /**
  * List header. First usable node is at header.forward().
  */
 private final Node<E> header;
 /**
  * List trailer. Last usable node is at trailer.back().
  */
 private final Node<E> trailer;
 /**
  * Constructs an empty deque.
  */
 public ConcurrentDoublyLinkedList() {
    Node<E> h = new Node<E>(null, null, null);
    Node<E> t = new Node<E>(null, null, h);
   h.setNext(t);
   header = h;
   trailer = t;
 }
 /**
  * Constructs a deque containing the elements of the specified collection,
   * in the order they are returned by the collection's iterator.
  * 
  * @param c
  *            the collection whose elements are to be placed into this
  *            deque.
  * @throws NullPointerException
  *             if c or any element within it is null
  */
 public ConcurrentDoublyLinkedList(Collection<? extends E> c) {
   this();
   addAll(c);
 }
 /**
  * Prepends the given element at the beginning of this deque.
  * 
  * @param o
  *            the element to be inserted at the beginning of this deque.
  * @throws NullPointerException
  *             if the specified element is null
  */
 public void addFirst(E o) {
   checkNullArg(o);
   while (header.append(o) == null)
     ;
 }
 /**
  * Appends the given element to the end of this deque. This is identical in
  * function to the add method.
  * 
  * @param o
  *            the element to be inserted at the end of this deque.
  * @throws NullPointerException
  *             if the specified element is null
  */
 public void addLast(E o) {
   checkNullArg(o);
   while (trailer.prepend(o) == null)
     ;
 }
 /**
  * Prepends the given element at the beginning of this deque.
  * 
  * @param o
  *            the element to be inserted at the beginning of this deque.
  * @return true always
  * @throws NullPointerException
  *             if the specified element is null
  */
 public boolean offerFirst(E o) {
   addFirst(o);
   return true;
 }
 /**
  * Appends the given element to the end of this deque. (Identical in
  * function to the add method; included only for consistency.)
  * 
  * @param o
  *            the element to be inserted at the end of this deque.
  * @return true always
  * @throws NullPointerException
  *             if the specified element is null
  */
 public boolean offerLast(E o) {
   addLast(o);
   return true;
 }
 /**
  * Retrieves, but does not remove, the first element of this deque, or
  * returns null if this deque is empty.
  * 
  * @return the first element of this queue, or null if empty.
  */
 public E peekFirst() {
   Node<E> n = header.successor();
   return (n == null) ? null : n.element;
 }
 /**
  * Retrieves, but does not remove, the last element of this deque, or
  * returns null if this deque is empty.
  * 
  * @return the last element of this deque, or null if empty.
  */
 public E peekLast() {
   Node<E> n = trailer.predecessor();
   return (n == null) ? null : n.element;
 }
 /**
  * Returns the first element in this deque.
  * 
  * @return the first element in this deque.
  * @throws NoSuchElementException
  *             if this deque is empty.
  */
 public E getFirst() {
   return screenNullResult(peekFirst());
 }
 /**
  * Returns the last element in this deque.
  * 
  * @return the last element in this deque.
  * @throws NoSuchElementException
  *             if this deque is empty.
  */
 public E getLast() {
   return screenNullResult(peekLast());
 }
 /**
  * Retrieves and removes the first element of this deque, or returns null if
  * this deque is empty.
  * 
  * @return the first element of this deque, or null if empty.
  */
 public E pollFirst() {
   for (;;) {
     Node<E> n = header.successor();
     if (!usable(n))
       return null;
     if (n.delete())
       return n.element;
   }
 }
 /**
  * Retrieves and removes the last element of this deque, or returns null if
  * this deque is empty.
  * 
  * @return the last element of this deque, or null if empty.
  */
 public E pollLast() {
   for (;;) {
     Node<E> n = trailer.predecessor();
     if (!usable(n))
       return null;
     if (n.delete())
       return n.element;
   }
 }
 /**
  * Removes and returns the first element from this deque.
  * 
  * @return the first element from this deque.
  * @throws NoSuchElementException
  *             if this deque is empty.
  */
 public E removeFirst() {
   return screenNullResult(pollFirst());
 }
 /**
  * Removes and returns the last element from this deque.
  * 
  * @return the last element from this deque.
  * @throws NoSuchElementException
  *             if this deque is empty.
  */
 public E removeLast() {
   return screenNullResult(pollLast());
 }
 // *** Queue and stack methods ***
 public boolean offer(E e) {
   return offerLast(e);
 }
 public boolean add(E e) {
   return offerLast(e);
 }
 public E poll() {
   return pollFirst();
 }
 public E remove() {
   return removeFirst();
 }
 public E peek() {
   return peekFirst();
 }
 public E element() {
   return getFirst();
 }
 public void push(E e) {
   addFirst(e);
 }
 public E pop() {
   return removeFirst();
 }
 /**
  * Removes the first element e such that o.equals(e),
  * if such an element exists in this deque. If the deque does not contain
  * the element, it is unchanged.
  * 
  * @param o
  *            element to be removed from this deque, if present.
  * @return true if the deque contained the specified element.
  * @throws NullPointerException
  *             if the specified element is null
  */
 public boolean removeFirstOccurrence(Object o) {
   checkNullArg(o);
   for (;;) {
     Node<E> n = header.forward();
     for (;;) {
       if (n == null)
         return false;
       if (o.equals(n.element)) {
         if (n.delete())
           return true;
         else
           break; // restart if interference
       }
       n = n.forward();
     }
   }
 }
 /**
  * Removes the last element e such that o.equals(e),
  * if such an element exists in this deque. If the deque does not contain
  * the element, it is unchanged.
  * 
  * @param o
  *            element to be removed from this deque, if present.
  * @return true if the deque contained the specified element.
  * @throws NullPointerException
  *             if the specified element is null
  */
 public boolean removeLastOccurrence(Object o) {
   checkNullArg(o);
   for (;;) {
     Node<E> s = trailer;
     for (;;) {
       Node<E> n = s.back();
       if (s.isDeleted() || (n != null && n.successor() != s))
         break; // restart if pred link is suspect.
       if (n == null)
         return false;
       if (o.equals(n.element)) {
         if (n.delete())
           return true;
         else
           break; // restart if interference
       }
       s = n;
     }
   }
 }
 /**
  * Returns true if this deque contains at least one element
  * e such that o.equals(e).
  * 
  * @param o
  *            element whose presence in this deque is to be tested.
  * @return true if this deque contains the specified element.
  */
 public boolean contains(Object o) {
   if (o == null)
     return false;
   for (Node<E> n = header.forward(); n != null; n = n.forward())
     if (o.equals(n.element))
       return true;
   return false;
 }
 /**
  * Returns true if this collection contains no elements.
  * <p>
  * 
  * @return true if this collection contains no elements.
  */
 public boolean isEmpty() {
   return !usable(header.successor());
 }
 /**
  * Returns the number of elements in this deque. If this deque contains more
  * than Integer.MAX_VALUE elements, it returns
  * Integer.MAX_VALUE.
  * 
  * <p>
  * Beware that, unlike in most collections, this method is NOT a
  * constant-time operation. Because of the asynchronous nature of these
  * deques, determining the current number of elements requires traversing
  * them all to count them. Additionally, it is possible for the size to
  * change during execution of this method, in which case the returned result
  * will be inaccurate. Thus, this method is typically not very useful in
  * concurrent applications.
  * 
  * @return the number of elements in this deque.
  */
 public int size() {
   long count = 0;
   for (Node<E> n = header.forward(); n != null; n = n.forward())
     ++count;
   return (count >= Integer.MAX_VALUE) ? Integer.MAX_VALUE : (int) count;
 }
 /**
  * Removes the first element e such that o.equals(e),
  * if such an element exists in this deque. If the deque does not contain
  * the element, it is unchanged.
  * 
  * @param o
  *            element to be removed from this deque, if present.
  * @return true if the deque contained the specified element.
  * @throws NullPointerException
  *             if the specified element is null
  */
 public boolean remove(Object o) {
   return removeFirstOccurrence(o);
 }
 /**
  * Appends all of the elements in the specified collection to the end of
  * this deque, in the order that they are returned by the specified
   * collection's iterator. The behavior of this operation is undefined if the
  * specified collection is modified while the operation is in progress.
  * (This implies that the behavior of this call is undefined if the
  * specified Collection is this deque, and this deque is nonempty.)
  * 
  * @param c
  *            the elements to be inserted into this deque.
  * @return true if this deque changed as a result of the call.
  * @throws NullPointerException
  *             if c or any element within it is null
  */
 public boolean addAll(Collection<? extends E> c) {
   Iterator<? extends E> it = c.iterator();
   if (!it.hasNext())
     return false;
   do {
     addLast(it.next());
   } while (it.hasNext());
   return true;
 }
 /**
  * Removes all of the elements from this deque.
  */
 public void clear() {
   while (pollFirst() != null)
     ;
 }
 /**
  * Returns an array containing all of the elements in this deque in the
  * correct order.
  * 
  * @return an array containing all of the elements in this deque in the
  *         correct order.
  */
 public Object[] toArray() {
   return toArrayList().toArray();
 }
 /**
  * Returns an array containing all of the elements in this deque in the
  * correct order; the runtime type of the returned array is that of the
  * specified array. If the deque fits in the specified array, it is returned
  * therein. Otherwise, a new array is allocated with the runtime type of the
  * specified array and the size of this deque.
  * <p>
  * 
  * If the deque fits in the specified array with room to spare (i.e., the
  * array has more elements than the deque), the element in the array
  * immediately following the end of the collection is set to null. This is
  * useful in determining the length of the deque only if the caller
  * knows that the deque does not contain any null elements.
  * 
  * @param a
  *            the array into which the elements of the deque are to be
  *            stored, if it is big enough; otherwise, a new array of the
  *            same runtime type is allocated for this purpose.
  * @return an array containing the elements of the deque.
  * @throws ArrayStoreException
  *             if the runtime type of a is not a supertype of the runtime
  *             type of every element in this deque.
  * @throws NullPointerException
  *             if the specified array is null.
  */
 public <T> T[] toArray(T[] a) {
   return toArrayList().toArray(a);
 }
 /**
  * Returns a weakly consistent iterator over the elements in this deque, in
  * first-to-last order. The next method returns elements
  * reflecting the state of the deque at some point at or since the creation
  * of the iterator. The method does not throw
  * {@link ConcurrentModificationException}, and may proceed concurrently
  * with other operations.
  * 
  * @return an iterator over the elements in this deque
  */
 public Iterator<E> iterator() {
   return new CLDIterator();
 }
 final class CLDIterator implements Iterator<E> {
   Node<E> last;
   Node<E> next = header.forward();
   public boolean hasNext() {
     return next != null;
   }
   public E next() {
     Node<E> l = last = next;
     if (l == null)
       throw new NoSuchElementException();
     next = next.forward();
     return l.element;
   }
   public void remove() {
     Node<E> l = last;
     if (l == null)
       throw new IllegalStateException();
     while (!l.delete() && !l.isDeleted())
       ;
   }
 }

}
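/*
 * Usage sketch (not part of the original listing): a minimal, illustrative
 * exercise of the deque operations defined above. Only methods declared in
 * this file are used; the class names below are illustrative.
 */
class ConcurrentDoublyLinkedListDemo {
  public static void main(String[] args) {
    ConcurrentDoublyLinkedList<String> d =
        new ConcurrentDoublyLinkedList<String>();
    d.addLast("b");   // [b]
    d.addFirst("a");  // [a, b]
    d.addLast("c");   // [a, b, c]

    // Iteration is first-to-last and weakly consistent.
    for (String s : d)
      System.out.println(s); // a, b, c

    // pollFirst/pollLast remove from opposite ends.
    System.out.println(d.pollFirst()); // a
    System.out.println(d.pollLast());  // c
    System.out.println(d.size());      // 1 (note: size() is an O(n) traversal)
  }
}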

/**

* Linked Nodes. As a minor efficiency hack, this class opportunistically
* inherits from AtomicReference, with the atomic ref used as the "next"
* link.
* 
 * Nodes are in doubly-linked lists. There are three kinds of special nodes,
 * distinguished as follows:
 *   - The list header has a null prev link.
 *   - The list trailer has a null next link.
 *   - A deletion marker has a prev link pointing to itself.
 * All three kinds of special nodes have null element fields.
* 
* Regular nodes have non-null element, next, and prev fields. To avoid
* visible inconsistencies when deletions overlap element replacement,
* replacements are done by replacing the node, not just setting the
* element.
* 
 * Nodes can be traversed by read-only ConcurrentDoublyLinkedList class
* operations just by following raw next pointers, so long as they ignore
* any special nodes seen along the way. (This is automated in method
* forward.) However, traversal using prev pointers is not guaranteed to see
* all live nodes since a prev pointer of a deleted node can become
* unrecoverably stale.
*/
class Node<E> extends AtomicReference<Node<E>> {
 private volatile Node<E> prev;
 final E element;
 /** Creates a node with given contents */
 Node(E element, Node<E> next, Node<E> prev) {
   super(next);
   this.prev = prev;
   this.element = element;
 }
 /** Creates a marker node with given successor */
 Node(Node<E> next) {
   super(next);
   this.prev = this;
   this.element = null;
 }
 /**
  * Gets next link (which is actually the value held as atomic
  * reference).
  */
 private Node<E> getNext() {
   return get();
 }
 /**
  * Sets next link
  * 
  * @param n
  *            the next node
  */
 void setNext(Node<E> n) {
   set(n);
 }
 /**
  * compareAndSet next link
  */
 private boolean casNext(Node<E> cmp, Node<E> val) {
   return compareAndSet(cmp, val);
 }
 /**
  * Gets prev link
  */
 private Node<E> getPrev() {
   return prev;
 }
 /**
  * Sets prev link
  * 
  * @param b
  *            the previous node
  */
 void setPrev(Node<E> b) {
   prev = b;
 }
 /**
  * Returns true if this is a header, trailer, or marker node
  */
 boolean isSpecial() {
   return element == null;
 }
 /**
  * Returns true if this is a trailer node
  */
 boolean isTrailer() {
   return getNext() == null;
 }
 /**
  * Returns true if this is a header node
  */
 boolean isHeader() {
   return getPrev() == null;
 }
 /**
  * Returns true if this is a marker node
  */
 boolean isMarker() {
   return getPrev() == this;
 }
 /**
  * Returns true if this node is followed by a marker, meaning that it is
  * deleted.
  * 
  * @return true if this node is deleted
  */
 boolean isDeleted() {
   Node<E> f = getNext();
   return f != null && f.isMarker();
 }
 /**
  * Returns next node, ignoring deletion marker
  */
 private Node<E> nextNonmarker() {
   Node<E> f = getNext();
   return (f == null || !f.isMarker()) ? f : f.getNext();
 }
 /**
  * Returns the next non-deleted node, swinging next pointer around any
   * encountered deleted nodes, and also patching up the successor's prev
  * link to point back to this. Returns null if this node is trailer so
  * has no successor.
  * 
  * @return successor, or null if no such
  */
 Node<E> successor() {
   Node<E> f = nextNonmarker();
   for (;;) {
     if (f == null)
       return null;
     if (!f.isDeleted()) {
       if (f.getPrev() != this && !isDeleted())
          f.setPrev(this); // relink f's prev
       return f;
     }
     Node<E> s = f.nextNonmarker();
     if (f == getNext())
       casNext(f, s); // unlink f
     f = s;
   }
 }
 /**
  * Returns the apparent predecessor of target by searching forward for
  * it starting at this node, patching up pointers while traversing. Used
  * by predecessor().
  * 
   * @return target's predecessor, or null if not found
  */
 private Node<E> findPredecessorOf(Node<E> target) {
   Node<E> n = this;
   for (;;) {
     Node<E> f = n.successor();
     if (f == target)
       return n;
     if (f == null)
       return null;
     n = f;
   }
 }
 /**
   * Returns the previous non-deleted node, patching up pointers as
   * needed. Returns null if this node is the header, so it has no
   * predecessor. May also return null if this node is deleted, so it
   * doesn't have a distinct predecessor.
  * 
  * @return predecessor or null if not found
  */
 Node<E> predecessor() {
   Node<E> n = this;
   for (;;) {
     Node<E> b = n.getPrev();
     if (b == null)
       return n.findPredecessorOf(this);
     Node<E> s = b.getNext();
     if (s == this)
       return b;
     if (s == null || !s.isMarker()) {
       Node<E> p = b.findPredecessorOf(this);
       if (p != null)
         return p;
     }
     n = b;
   }
 }
 /**
  * Returns the next node containing a nondeleted user element. Use for
  * forward list traversal.
  * 
  * @return successor, or null if no such
  */
 Node<E> forward() {
   Node<E> f = successor();
   return (f == null || f.isSpecial()) ? null : f;
 }
 /**
  * Returns previous node containing a nondeleted user element, if
  * possible. Use for backward list traversal, but beware that if this
  * method is called from a deleted node, it might not be able to
  * determine a usable predecessor.
  * 
  * @return predecessor, or null if no such could be found
  */
 Node<E> back() {
   Node<E> f = predecessor();
   return (f == null || f.isSpecial()) ? null : f;
 }
 /**
  * Tries to insert a node holding element as successor, failing if this
  * node is deleted.
  * 
  * @param element
  *            the element
  * @return the new node, or null on failure.
  */
 Node<E> append(E element) {
   for (;;) {
     Node<E> f = getNext();
     if (f == null || f.isMarker())
       return null;
     Node<E> x = new Node<E>(element, f, this);
     if (casNext(f, x)) {
       f.setPrev(x); // optimistically link
       return x;
     }
   }
 }
 /**
  * Tries to insert a node holding element as predecessor, failing if no
  * live predecessor can be found to link to.
  * 
  * @param element
  *            the element
  * @return the new node, or null on failure.
  */
 Node<E> prepend(E element) {
   for (;;) {
     Node<E> b = predecessor();
     if (b == null)
       return null;
     Node<E> x = new Node<E>(element, this, b);
     if (b.casNext(this, x)) {
       setPrev(x); // optimistically link
       return x;
     }
   }
 }
 /**
  * Tries to mark this node as deleted, failing if already deleted or if
  * this node is header or trailer
  * 
  * @return true if successful
  */
 boolean delete() {
   Node<E> b = getPrev();
   Node<E> f = getNext();
    if (b != null && f != null && !f.isMarker()
        && casNext(f, new Node<E>(f))) { // append a deletion marker
     if (b.casNext(this, f))
       f.setPrev(b);
     return true;
   }
   return false;
 }
 /**
   * Tries to insert a node holding element to replace this node,
   * failing if this node is already deleted.
  * 
  * @param newElement
  *            the new element
  * @return the new node, or null on failure.
  */
 Node<E> replace(E newElement) {
   for (;;) {
     Node<E> b = getPrev();
     Node<E> f = getNext();
     if (b == null || f == null || f.isMarker())
       return null;
     Node<E> x = new Node<E>(newElement, f, b);
      if (casNext(f, new Node<E>(x))) { // marker whose next link is x
       b.successor(); // to relink b
       x.successor(); // to relink f
       return x;
     }
   }
 }

}


 </source>
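The two-step, marker-based deletion used by delete() and successor() above can be illustrated with a small self-contained sketch. The Node, delete, isDeleted, and successor names below mirror the class above, but the explicit marker flag and the singly-linked layout are simplifications for illustration only (the class above instead marks a node by making its prev link point to itself):

   <source lang="java">
import java.util.concurrent.atomic.AtomicReference;

public class MarkerDeleteSketch {

  /**
   * Minimal singly-linked node: as in the class above, the node extends
   * AtomicReference so the inherited value is the atomically updatable
   * next link.
   */
  static class Node<E> extends AtomicReference<Node<E>> {
    final E element;
    final boolean marker; // true only for deletion-marker nodes

    Node(E element, Node<E> next) {
      super(next);
      this.element = element;
      this.marker = false;
    }

    /** Constructs a deletion marker whose next link is the given node. */
    Node(Node<E> next) {
      super(next);
      this.element = null;
      this.marker = true;
    }

    /** A node is logically deleted once a marker follows it. */
    boolean isDeleted() {
      Node<E> f = get();
      return f != null && f.marker;
    }

    /** Step one of deletion: CAS a marker in after this node. */
    boolean delete() {
      Node<E> f = get();
      return f != null && !f.marker && compareAndSet(f, new Node<E>(f));
    }

    /** Returns the next link, stepping over a marker if present. */
    private Node<E> nextNonmarker() {
      Node<E> f = get();
      return (f == null || !f.marker) ? f : f.get();
    }

    /**
     * Step two of deletion, performed lazily by traversals: swing this
     * node's next link past any deleted node and its marker.
     */
    Node<E> successor() {
      Node<E> f = nextNonmarker();
      for (;;) {
        if (f == null)
          return null;
        if (!f.isDeleted())
          return f;
        Node<E> s = f.nextNonmarker(); // node after f's marker
        if (f == get())
          compareAndSet(f, s); // physically unlink f and its marker
        f = s;
      }
    }
  }

  public static void main(String[] args) {
    Node<String> c = new Node<String>("C", null);
    Node<String> b = new Node<String>("B", c);
    Node<String> a = new Node<String>("A", b);
    b.delete(); // logical deletion: B is now followed by a marker
    System.out.println(b.isDeleted()); // true
    System.out.println(a.successor().element); // C - traversal skips and unlinks B
  }
}
 </source>

The marker makes the logical deletion a single CAS even when the physical unlink races with concurrent traversals: any thread that sees the marker knows the preceding node is dead and can help unlink it.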
   
  
 
  



Synchronized Queue

   <source lang="java">
 

/*

* Copyright (c) 2003 - 2007 OpenSubsystems s.r.o. Slovak Republic. All rights reserved.
* 
* Project: OpenSubsystems
* 
* $Id: SynchronizedQueue.java,v 1.4 2007/01/07 06:14:00 bastafidli Exp $
* 
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License. 
* 
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
* GNU General Public License for more details.
* 
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA 
*/

import java.util.LinkedList;
import java.util.List;

/**

* Class that implements an unbounded synchronized queue: the consumer of
* objects from the queue waits (is blocked) in the get method until an
* object becomes available.
*
* @version $Id: SynchronizedQueue.java,v 1.4 2007/01/07 06:14:00 bastafidli Exp $
* @author Miro Halas
* @code.reviewer Miro Halas
* @code.reviewed Initial revision
*/

public class SynchronizedQueue {

  // Attributes ///////////////////////////////////////////////////////////////
  
  /**
   * Cache of object produced by producer and consumed by consumer.
   */
  protected List m_lstObjects;
  // Constructors /////////////////////////////////////////////////////////////
  
  /**
   * Constructor for Synchronized Queue Object.
   */
  public SynchronizedQueue(
  )
  {
     super();
     m_lstObjects = new LinkedList();
  }
  // Logic ////////////////////////////////////////////////////////////////////
  
  /**
   * Finalizer for SynchronizedQueue, called when no other object
   * holds a reference to it.
   *
   * @exception Throwable - default finalizer exception
   */
  protected void finalize(
  ) throws Throwable
  {
      // Explicitly clear the list just to help the garbage collector
     m_lstObjects.clear();
     m_lstObjects = null;
     super.finalize();
  }
  /**
   * Get the object from the beginning of the queue.
   *
   * @return Object - object removed from the beginning of the queue
   * @exception InterruptedException - if the thread is blocked in this
   *                                   method and somebody calls the interrupt
   *                                   method, an InterruptedException is thrown
   */
  public synchronized Object get(
  ) throws InterruptedException
  {
     Object objReturn = null;
      // Use while rather than if: the thread may wake up spuriously, or
      // another consumer may take the object between notifyAll and this
      // thread reacquiring the lock
      while (m_lstObjects.isEmpty())
      {
         // There is no object in the queue, go to sleep
         try
         {
            wait();
         }
         catch (InterruptedException ieException)
         {
            // Somebody woke us up, that means all threads waiting on this
            // object competed for the lock and this one won and the object
            // is locked again
            // The thread can be woken up for two reasons: the producer put
            // a new object into the queue, or somebody called the interrupt
            // method - in that case rethrow the exception
            if (m_lstObjects.isEmpty())
            {
               throw ieException;
            }
         }
      }
     // Remove the first object in the queue
     objReturn = m_lstObjects.remove(0);
     return objReturn;
  }
  /**
   * Put the object to the end of the queue.
   *
   * @param objNew - new object, can be null
   */
  public synchronized void put(
     Object objNew
  )
  {
     m_lstObjects.add(objNew);
     // New object in the queue, notify others
     notifyAll();
  }
  /**
   * Test if the queue is empty.
   *
   * @return boolean - true if the queue is empty
   */
  public synchronized boolean isEmpty(
  )
  {
     return m_lstObjects.isEmpty();
  }

}


 </source>
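A minimal, self-contained sketch of the blocking handshake that SynchronizedQueue implements. The inner Queue class is a trimmed copy of the class above (with the while-loop wait guard), and roundTrip is a hypothetical helper added only for this demonstration:

   <source lang="java">
import java.util.LinkedList;
import java.util.List;

public class SynchronizedQueueDemo {

   /** Trimmed copy of SynchronizedQueue above: unbounded, blocking get. */
   static class Queue {
      private final List m_lstObjects = new LinkedList();

      public synchronized Object get() throws InterruptedException {
         // while, not if: guards against spurious wakeups and racing consumers
         while (m_lstObjects.isEmpty()) {
            wait();
         }
         return m_lstObjects.remove(0);
      }

      public synchronized void put(Object objNew) {
         m_lstObjects.add(objNew);
         notifyAll(); // wake any consumer blocked in get()
      }
   }

   /** Passes payload through the queue from this thread to a consumer thread. */
   static Object roundTrip(final Object payload) throws InterruptedException {
      final Queue queue = new Queue();
      final Object[] received = new Object[1];
      Thread consumer = new Thread(new Runnable() {
         public void run() {
            try {
               received[0] = queue.get(); // blocks until put() supplies an object
            } catch (InterruptedException e) {
               Thread.currentThread().interrupt();
            }
         }
      });
      consumer.start();
      queue.put(payload); // wakes the consumer
      consumer.join();    // wait until the consumer has taken the object
      return received[0];
   }

   public static void main(String[] args) throws InterruptedException {
      System.out.println(roundTrip("payload")); // prints "payload"
   }
}
 </source>

Because get blocks inside a while loop, it does not matter whether the consumer reaches get before or after the producer calls put: the consumer either waits for the notifyAll or finds the object already queued.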