Java - HashMap and HashSet Source Code Analysis

  • HashMap source code analysis

    The HashMap source touches on the following topics:
    1. Object serialization and deserialization
    2. The transient keyword
    3. The clone method: shallow copy vs. deep copy
    4. Linked lists
    5. Assertions
    Without further ado, here is the source.

package java.util;
import java.io.*;
/**
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */

public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable
{

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 16;

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     * (1 shifted left 30 bits; in practice nobody configures a capacity
     * this large.)
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     * (Usually left at this default.)
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * The table, resized as necessary. Length MUST Always be a power of two.
     * The Entry array.
     *
     * On the transient keyword: when an object is persisted, there may be
     * fields we do not want the serialization mechanism to save. To turn
     * off serialization for a particular field, mark it with the keyword
     * transient; such a field is not part of the object's serialized form,
     * while non-transient instance fields are included. (Static fields are
     * not part of per-instance serialization either.) Also note that an
     * object obtained by deserialization is a distinct object, and it is
     * not produced through a constructor - deserialization does not run
     * the constructor.
     * My blog post: http://blog.csdn.net/u010156024/article/details/48345257
     */
    transient Entry<K,V>[] table;
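The transient behavior described above can be checked with a short sketch. This is my own toy example, not JDK code; the class and field names (`Box`, `kept`, `skipped`) are invented for illustration:

```java
import java.io.*;

// A minimal serializable class with one transient field.
class Box implements Serializable {
    int kept = 42;             // included in the serialized form
    transient int skipped = 7; // excluded from the serialized form

    Box() { System.out.println("constructor ran"); }
}

public class TransientDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Box()); // prints "constructor ran" once
        }
        try (ObjectInputStream in = new ObjectInputStream(
                 new ByteArrayInputStream(bytes.toByteArray()))) {
            // Deserialization does not invoke the constructor again.
            Box copy = (Box) in.readObject();
            System.out.println(copy.kept);    // 42, restored from the stream
            System.out.println(copy.skipped); // 0, transient field reset to default
        }
    }
}
```

Running it shows `constructor ran` printed only once, which matches the point above that deserialized objects bypass the constructor.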

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The next size value at which to resize (capacity * load factor).
     * @serial
     */
    int threshold;

    /**
     * The load factor for the hash table.
     * @serial
     */
    final float loadFactor;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The default threshold of map capacity above which alternative hashing is
     * used for String keys. Alternative hashing reduces the incidence of
     * collisions due to weak hash code calculation for String keys.
     * <p/>
     * This value may be overridden by defining the system property
     * {@code jdk.map.althashing.threshold}. A property value of {@code 1}
     * forces alternative hashing to be used at all times whereas
     * {@code -1} value ensures that alternative hashing is never used.
     * (With this default, alternative hashing is effectively disabled.)
     */
    static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;

    /**
     * Holds values which can't be initialized until after VM is booted.
     */
    private static class Holder {

            // Unsafe mechanics
        /** Unsafe utilities */
        static final sun.misc.Unsafe UNSAFE;

        /** Offset of "final" hashSeed field we must set in readObject() method. */
        static final long HASHSEED_OFFSET;

        /** Table capacity above which to switch to use alternative hashing. */
        static final int ALTERNATIVE_HASHING_THRESHOLD;

        static {
            String altThreshold = java.security.AccessController.doPrivileged(
                new sun.security.action.GetPropertyAction(
                    "jdk.map.althashing.threshold"));

            int threshold;
            try {
                threshold = (null != altThreshold)
                        ? Integer.parseInt(altThreshold)
                        : ALTERNATIVE_HASHING_THRESHOLD_DEFAULT;

                // disable alternative hashing if -1
                if (threshold == -1) {
                    threshold = Integer.MAX_VALUE;
                }

                if (threshold < 0) {
                    throw new IllegalArgumentException("value must be positive integer.");
                }
            } catch(IllegalArgumentException failed) {
                throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
            }
            ALTERNATIVE_HASHING_THRESHOLD = threshold;

            try {
                UNSAFE = sun.misc.Unsafe.getUnsafe();
                HASHSEED_OFFSET = UNSAFE.objectFieldOffset(
                    HashMap.class.getDeclaredField("hashSeed"));
            } catch (NoSuchFieldException | SecurityException e) {
                throw new Error("Failed to record hashSeed offset", e);
            }
        }
    }

    /**
     * If {@code true} then perform alternative hashing of String keys to reduce
     * the incidence of collisions due to weak hash code calculation.
     */
    transient boolean useAltHashing;

    /**
     * A randomizing value associated with this instance that is applied to
     * hash code of keys to make hash collisions harder to find.
     */
    transient final int hashSeed = sun.misc.Hashing.randomHashSeed(this);

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     *
     * A HashMap's capacity is capped at MAXIMUM_CAPACITY; see how the
     * capacity is set here and in resize().
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);

        // Find a power of 2 >= initialCapacity
/* A common question: why is HashMap most efficient when the table length
   is a power of two? Take 2^4 = 16 as an example, comparing a table of
   length 16 against one of length 15, with hash codes 8 and 9 in both
   cases. With length 15 the mask is 14 (1110), and 8 and 9 ANDed with
   1110 produce the same result, so they collide: both land in the same
   bucket and end up on one chain, and lookups must walk that chain to
   reach 8 or 9, which hurts query performance. Worse, with a mask of
   1110 the lowest bit of the index is always 0, so slots 0001, 0011,
   0101, 0111, 1001, 1011 and 1101 can never hold an element - a large
   waste of space. With fewer usable slots than the array length, the
   collision rate rises further and queries slow down even more. With
   length 16 the mask is 1111, every slot is reachable, and the low hash
   bits are used in full. */
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;

        this.loadFactor = loadFactor;
        threshold = (int)Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
        table = new Entry[capacity];
        useAltHashing = sun.misc.VM.isBooted() &&
                (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
        init();
    }
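The constructor's capacity loop can be exercised on its own. The sketch below (class and method names are mine) mirrors the rounding-up behavior and the threshold computation:

```java
public class CapacityDemo {
    // Mirrors the loop in HashMap(int, float): the table length becomes
    // the smallest power of two >= initialCapacity.
    static int roundUp(int initialCapacity) {
        int capacity = 1;
        while (capacity < initialCapacity)
            capacity <<= 1;
        return capacity;
    }

    public static void main(String[] args) {
        System.out.println(roundUp(16)); // 16: already a power of two
        System.out.println(roundUp(17)); // 32: rounded up
        System.out.println(roundUp(1));  // 1
        // The resize threshold is then capacity * loadFactor:
        System.out.println((int) (roundUp(17) * 0.75f)); // 24
    }
}
```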

    /**
     * Constructs an empty <tt>HashMap</tt> with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     *
     * Specifies the initial length of the internal table array.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty <tt>HashMap</tt> with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs a new <tt>HashMap</tt> with the same mappings as the
     * specified <tt>Map</tt>.  The <tt>HashMap</tt> is created with
     * default load factor (0.75) and an initial capacity sufficient to
     * hold the mappings in the specified <tt>Map</tt>.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                      DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
        // Called when constructing a HashMap from another map; this path
        // performs no resize checks.
        putAllForCreate(m);
    }

    // internal utilities

    /**
     * Initialization hook for subclasses. This method is called
     * in all constructors and pseudo-constructors (clone, readObject)
     * after HashMap has been initialized but before any entries have
     * been inserted.  (In the absence of this method, readObject would
     * require explicit knowledge of subclasses.)
     */
    void init() {
    }

    /**
     * Retrieve object hash code and applies a supplemental hash function to the
     * result hash, which defends against poor quality hash functions.  This is
     * critical because HashMap uses power-of-two length hash tables, that
     * otherwise encounter collisions for hashCodes that do not differ
     * in lower bits. Note: Null keys always map to hash 0, thus index 0.
     *
     * The computation starts from k.hashCode(). Object's hashCode() is
     * declared native, meaning the default hash value is computed by the
     * VM's underlying C/C++ implementation; the shifts below then spread
     * the high-order bits into the low-order bits.
     */
    final int hash(Object k) {
        int h = 0;
        if (useAltHashing) {
            if (k instanceof String) {
                return sun.misc.Hashing.stringHash32((String) k);
            }
            h = hashSeed;
        }

        h ^= k.hashCode();

        // This function ensures that hashCodes that differ only by
        // constant multiples at each bit position have a bounded
        // number of collisions (approximately 8 at default load factor).
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    /**
     * Returns index for hash code h.
     * h is the hash value and length is the table capacity, always a
     * power of two, so (length - 1) has every bit set below the top one;
     * the & mask is therefore a fast, direct index computation.
     */
    static int indexFor(int h, int length) {
        return h & (length-1);
    }
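A quick sketch (my own demo class, wrapping the one-line indexFor above) shows both that the mask is equivalent to modulo for power-of-two lengths, and why a length of 15 would waste slots:

```java
public class IndexForDemo {
    // Same computation as HashMap.indexFor.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        // For a power-of-two length, the mask equals h % length
        // (for non-negative h).
        for (int h = 0; h < 100; h++)
            if (indexFor(h, 16) != h % 16) throw new AssertionError();

        // With length 16, hashes 8 and 9 get distinct buckets...
        System.out.println(indexFor(8, 16)); // 8
        System.out.println(indexFor(9, 16)); // 9
        // ...but with length 15 the mask is 1110, so they collide
        // and every odd-numbered slot becomes unreachable.
        System.out.println(indexFor(8, 15)); // 8
        System.out.println(indexFor(9, 15)); // 8
    }
}
```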

    /**
     * Returns the number of key-value mappings in this map.
     *
     * @return the number of key-value mappings in this map
     */
    public int size() {
        return size;
    }

    /**
     * Returns <tt>true</tt> if this map contains no key-value mappings.
     *
     * @return <tt>true</tt> if this map contains no key-value mappings
     */
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * <p>More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}.  (There can be at most one such mapping.)
     *
     * <p>A return value of {@code null} does not <i>necessarily</i>
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        if (key == null)
            return getForNullKey();
        Entry<K,V> entry = getEntry(key);
        return null == entry ? null : entry.getValue();
    }

    /**
     * Offloaded version of get() to look up null keys.  Null keys map
     * to index 0.  This null case is split out into separate methods
     * for the sake of performance in the two most commonly used
     * operations (get and put), but incorporated with conditionals in
     * others.
     * (Returns the value mapped to the null key, if any.)
     */
    private V getForNullKey() {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null)
                return e.value;
        }
        return null;
    }

    /**
     * Returns <tt>true</tt> if this map contains a mapping for the
     * specified key.
     *
     * @param   key   The key whose presence in this map is to be tested
     * @return <tt>true</tt> if this map contains a mapping for the specified
     *         key.
     */
    public boolean containsKey(Object key) {
        return getEntry(key) != null;
    }

    /**
     * Returns the entry associated with the specified key in the
     * HashMap.  Returns null if the HashMap contains no mapping
     * for the key.
     * (Looks up the Entry object for the given key.)
     */
    final Entry<K,V> getEntry(Object key) {
        int hash = (key == null) ? 0 : hash(key);
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k))))
                return e;
        }
        return null;
    }


    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     *
     * The public put method. If key is null it delegates to
     * putForNullKey(value), which stores the entry at bucket 0.
     * Otherwise the key's hash is computed and mapped to a table index,
     * and the chain at that index is scanned: if an entry with an equal
     * key is found, its value is replaced with the new value and the old
     * value returned. If no such entry exists, addEntry(hash, key,
     * value, i) inserts a new one; addEntry performs the resize check.
     */
    public V put(K key, V value) {
        if (key == null)
            return putForNullKey(value);
        int hash = hash(key);
        int i = indexFor(hash, table.length);
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }

        modCount++;
        addEntry(hash, key, value, i);
        return null;
    }
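The replace-and-return behavior described above is easy to observe through the public API:

```java
import java.util.HashMap;

public class PutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1)); // null: no previous mapping
        System.out.println(map.put("a", 2)); // 1: the old value is returned...
        System.out.println(map.get("a"));    // 2: ...and replaced in the map
    }
}
```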

    /**
     * Offloaded version of put for null keys.
     * Stores a value under the null key, replacing any existing null-key
     * value. In addEntry(0, null, value, 0) the null argument is the
     * key, so the map can hold a key-value pair whose key is null - but
     * only one: every later put with a null key replaces that value.
     * If the loop completes without finding a null-key entry, the
     * addEntry call after the loop runs; if size has reached the
     * threshold and bucket 0 is occupied, addEntry grows the table,
     * transfers all existing entries into the new array, and then
     * createEntry adds the null-key entry at bucket 0 of the new table.
     */
    private V putForNullKey(V value) {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }
        modCount++;
        addEntry(0, null, value, 0);
        return null;
    }
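The single-null-key behavior can be confirmed from the public API; the sketch also shows the null-value case that motivates containsKey:

```java
import java.util.HashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put(null, "first");
        map.put(null, "second"); // replaces the single null-key entry
        System.out.println(map.size());    // 1
        System.out.println(map.get(null)); // second

        // Null values are also legal, which is why get() returning null
        // is ambiguous and containsKey exists:
        map.put("k", null);
        System.out.println(map.get("k"));         // null
        System.out.println(map.containsKey("k")); // true
    }
}
```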

    /**
     * This method is used instead of put by constructors and
     * pseudoconstructors (clone, readObject).  It does not resize the table,
     * check for comodification, etc.  It calls createEntry rather than
     * addEntry.
     * Adds the key-value pair with no resize check, which makes it very
     * fast; used by clone() and by the constructor that copies a Map.
     */
    private void putForCreate(K key, V value) {
        int hash = null == key ? 0 : hash(key);
        int i = indexFor(hash, table.length);

        /*
         * Look for preexisting entry for key.  This will never happen for
         * clone or deserialize.  It will only happen for construction if the
         * input Map is a sorted map whose ordering is inconsistent w/ equals.
         * During clone and deserialization this loop body never runs,
         * because table[i] holds no value yet (e == null); "this" here is
         * the object the method is invoked on.
         */
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                e.value = value;
                return;
            }
        }
        // No resize check is needed here.
        createEntry(hash, key, value, i);
    }
    /**
     * Adds the elements of the map m to this object.
     * No resize check is performed, which makes this quite efficient;
     * it is used by clone() and by the constructor that copies a Map.
     * In both cases the number of elements is known up front and
     * initialization is complete - the table has already been sized -
     * so elements only need to be inserted and the resize check can be
     * skipped, hence the speed.
     */
    private void putAllForCreate(Map<? extends K, ? extends V> m) {
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            putForCreate(e.getKey(), e.getValue());
    }

    /**
     * Rehashes the contents of this map into a new array with a
     * larger capacity.  This method is called automatically when the
     * number of keys in this map reaches its threshold.
     *
     * If current capacity is MAXIMUM_CAPACITY, this method does not
     * resize the map, but sets threshold to Integer.MAX_VALUE.
     * This has the effect of preventing future calls.
     *
     * @param newCapacity the new capacity, MUST be a power of two;
     *        must be greater than current capacity unless current
     *        capacity is MAXIMUM_CAPACITY (in which case value
     *        is irrelevant).
     *
     * Reallocates the table array. A HashMap's capacity can never exceed
     * MAXIMUM_CAPACITY; compare how capacity is set in the constructor.
     */
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        // MAXIMUM_CAPACITY is 1 << 30, an int value
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }

        Entry[] newTable = new Entry[newCapacity];
        boolean oldAltHashing = useAltHashing;
        useAltHashing |= sun.misc.VM.isBooted() &&
                (newCapacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
        boolean rehash = oldAltHashing ^ useAltHashing;
        transfer(newTable, rehash);
        table = newTable;
        threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
    }

    /**
     * Transfers all entries from current table to newTable.
     * Copies every element of the current array into the new one; this
     * runs right after the table is enlarged, so it is one of the spots
     * that most affects HashMap's performance.
     */
    void transfer(Entry[] newTable, boolean rehash) {
        int newCapacity = newTable.length;
        for (Entry<K,V> e : table) {
            while(null != e) {
                Entry<K,V> next = e.next;
                if (rehash) {
                    e.hash = null == e.key ? 0 : hash(e.key);
                }
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            }
        }
    }
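Because the index is just a bit mask, doubling the capacity sends each entry either to its old index i or to i + oldCapacity, depending on one extra hash bit. This sketch (my own demo class reusing indexFor's arithmetic, not JDK code) checks that property:

```java
public class TransferDemo {
    // Same computation as HashMap.indexFor.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        for (int h = 0; h < 1000; h++) {
            int oldIdx = indexFor(h, oldCap);
            int newIdx = indexFor(h, newCap);
            // The new index differs from the old only by the bit that
            // oldCap contributes (here: whether bit 4 of the hash is set).
            if (newIdx != oldIdx && newIdx != oldIdx + oldCap)
                throw new AssertionError();
        }
        System.out.println("every entry stays at i or moves to i + oldCap");
    }
}
```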

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     *
     * Adds every element of m to this map; m must not be null, or a
     * NullPointerException is thrown. While adding, the capacity is
     * rechecked and the table is grown if it is too small.
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        int numKeysToBeAdded = m.size();
        if (numKeysToBeAdded == 0)
            return;

        /*
         * Expand the map if the number of mappings to be added
         * is greater than or equal to threshold.  This is conservative; the
         * obvious condition is (m.size() + size) >= threshold, but this
         * condition could result in a map with twice the appropriate capacity,
         * if the keys to be added overlap with the keys already in this map.
         * By using the conservative calculation, we subject ourself
         * to at most one extra resize.
         *
         * The expansion decision is slightly subtle:
         * 1. If the number of entries to add does not exceed threshold,
         *    no expansion happens.
         * 2. Otherwise a target capacity of size / loadFactor is computed,
         *    which is larger than the number of entries to add, and
         *    newCapacity becomes the smallest power of two reaching it.
         *    The table is resized only if newCapacity exceeds the current
         *    length; if not, the current table is already big enough.
         * Why are the entries already in the map ignored? Because the
         * copy below goes through put(e.getKey(), e.getValue()), and put
         * re-checks the threshold on every insertion - so even if the old
         * entries plus the new ones exceed the capacity chosen here, at
         * most one extra resize occurs.
         */
        if (numKeysToBeAdded > threshold) {
            int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);
            if (targetCapacity > MAXIMUM_CAPACITY)
                targetCapacity = MAXIMUM_CAPACITY;
            int newCapacity = table.length;
            while (newCapacity < targetCapacity)
                newCapacity <<= 1;
            if (newCapacity > table.length)
                resize(newCapacity);
        }

        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            put(e.getKey(), e.getValue());
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     *
     * @param  key key whose mapping is to be removed from the map
     * @return the previous value associated with <tt>key</tt>, or
     *         <tt>null</tt> if there was no mapping for <tt>key</tt>.
     *         (A <tt>null</tt> return can also indicate that the map
     *         previously associated <tt>null</tt> with <tt>key</tt>.)
     */
    public V remove(Object key) {
        Entry<K,V> e = removeEntryForKey(key);
        return (e == null ? null : e.value);
    }

    /**
     * Removes and returns the entry associated with the specified key
     * in the HashMap.  Returns null if the HashMap contains no mapping
     * for this key.
     */
    final Entry<K,V> removeEntryForKey(Object key) {
        int hash = (key == null) ? 0 : hash(key);
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;
        /*
         * Walk the Entry chain. If the first node is the one to delete,
         * simply set table[i] = next. Otherwise keep walking until the
         * node is found, then unlink it by pointing the previous node's
         * next at the node that follows it. If the element is never
         * found, the loop exits with e == null, so returning e returns
         * null.
         */
        while (e != null) {
            Entry<K,V> next = e.next;
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }
        return e;
    }

    /**
     * Special version of remove for EntrySet using {@code Map.Entry.equals()}
     * for matching.
     * Essentially the same as removeEntryForKey(Object key), except that
     * the parameter is a Map.Entry (or a subclass) rather than a key.
     * Note that HashMap offers no way to remove entries by value alone.
     */
    final Entry<K,V> removeMapping(Object o) {
        if (!(o instanceof Map.Entry))
            return null;

        Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
        Object key = entry.getKey();
        int hash = (key == null) ? 0 : hash(key);
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        while (e != null) {
            Entry<K,V> next = e.next;
            if (e.hash == hash && e.equals(entry)) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }
        return e;
    }

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        modCount++;
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            tab[i] = null;
        size = 0;
    }

    /**
     * Returns <tt>true</tt> if this map maps one or more keys to the
     * specified value.
     *
     * @param value value whose presence in this map is to be tested
     * @return <tt>true</tt> if this map maps one or more keys to the
     *         specified value
     */
    public boolean containsValue(Object value) {
        if (value == null)
            return containsNullValue();

        Entry[] tab = table;
        for (int i = 0; i < tab.length ; i++)
            for (Entry e = tab[i] ; e != null ; e = e.next)
                if (value.equals(e.value))
                    return true;
        return false;
    }

    /**
     * Special-case code for containsValue with null argument
     */
    private boolean containsNullValue() {
        Entry[] tab = table;
        for (int i = 0; i < tab.length ; i++)
            for (Entry e = tab[i] ; e != null ; e = e.next)
                if (e.value == null)
                    return true;
        return false;
    }

    /**
     * Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
     * values themselves are not cloned.
     *
     * @return a shallow copy of this map
     *
     * Shallow copy: only references are copied, so the copy refers to
     * the very same key and value objects. A deep copy would clone the
     * referenced objects as well, so the copy would refer to distinct
     * objects.
     */
    public Object clone() {
        HashMap<K,V> result = null;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
        /*
         * An assertion is used here (commented out). When the expression
         * after assert is true, nothing is printed; when it is false, an
         * AssertionError is thrown. With the two-argument form
         * "assert expr1 : expr2", if expr1 is false the message from
         * expr2 is reported, e.g.:
         * Exception in thread "main" java.lang.AssertionError: error
         *     at volatileTest.main(volatileTest.java:22)
         */
            // assert false;
        }
        result.table = new Entry[table.length];
        result.entrySet = null;
        result.modCount = 0;
        result.size = 0;
        result.init();
        result.putAllForCreate(this);

        return result;
    }
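The shallow copy is observable from outside: the clone holds the same value objects, so mutating a value through one map is visible through the other. A small sketch of mine, using StringBuilder as a convenient mutable value:

```java
import java.util.HashMap;

public class CloneDemo {
    public static void main(String[] args) {
        HashMap<String, StringBuilder> original = new HashMap<>();
        original.put("k", new StringBuilder("abc"));

        @SuppressWarnings("unchecked")
        HashMap<String, StringBuilder> copy =
            (HashMap<String, StringBuilder>) original.clone();

        // The maps are distinct objects with distinct tables...
        copy.put("extra", new StringBuilder());
        System.out.println(original.containsKey("extra")); // false

        // ...but the values are shared references (shallow copy).
        original.get("k").append("!");
        System.out.println(copy.get("k")); // abc!
    }
}
```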
    // HashMap's static nested class; implements the Map.Entry interface
    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        Entry<K,V> next;
        int hash;

        /**
         * Creates new entry.
         */
        Entry(int h, K k, V v, Entry<K,V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }
        /*
         * 1. If the argument is not a Map.Entry, return false immediately.
         * 2. Return true only if both the keys and the values are equal.
         */
        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry)o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        public final int hashCode() {
            return (key==null   ? 0 : key.hashCode()) ^
                   (value==null ? 0 : value.hashCode());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        /**
         * This method is invoked whenever the value in an entry is
         * overwritten by an invocation of put(k,v) for a key k that's already
         * in the HashMap.
         */
        void recordAccess(HashMap<K,V> m) {
        }

        /**
         * This method is invoked whenever the entry is
         * removed from the table.
         */
        void recordRemoval(HashMap<K,V> m) {
        }
    }

    /**
     * Adds a new entry with the specified key, value and hash code to
     * the specified bucket.  It is the responsibility of this
     * method to resize the table if appropriate.
     *
     * Subclass overrides this to alter the behavior of put method.
     *
     * The table grows only when size has reached the threshold AND the
     * target bucket already holds an entry. If size has reached the
     * threshold but the bucket is empty, or size is below the threshold,
     * no resize happens; collisions are handled by the bucket's linked
     * list.
     */
    void addEntry(int hash, K key, V value, int bucketIndex) {
        if ((size >= threshold) && (null != table[bucketIndex])) {
            resize(2 * table.length);
            hash = (null != key) ? hash(key) : 0;
            bucketIndex = indexFor(hash, table.length);
        }
        createEntry(hash, key, value, bucketIndex);
    }

    /**
     * Like addEntry except that this version is used when creating entries
     * as part of Map construction or "pseudo-construction" (cloning,
     * deserialization).  This version needn't worry about resizing the table.
     *
     * Subclass overrides this to alter the behavior of HashMap(Map),
     * clone, and readObject.
     */
    void createEntry(int hash, K key, V value, int bucketIndex) {
        Entry<K,V> e = table[bucketIndex];
        table[bucketIndex] = new Entry<>(hash, key, value, e);
        size++;
    }

    private abstract class HashIterator<E> implements Iterator<E> {
        Entry<K,V> next;        // next entry to return
        int expectedModCount;   // For fast-fail
        int index;              // current slot
        Entry<K,V> current;     // current entry

        HashIterator() {
            expectedModCount = modCount;
            if (size > 0) { // advance to first entry
                Entry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final Entry<K,V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Entry<K,V> e = next;
            if (e == null)
                throw new NoSuchElementException();

            if ((next = e.next) == null) {
                Entry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
            current = e;
            return e;
        }

        public void remove() {
            if (current == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Object k = current.key;
            current = null;
            HashMap.this.removeEntryForKey(k);
            expectedModCount = modCount;
        }
    }
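The modCount/expectedModCount bookkeeping above is what produces the fail-fast behavior, and it can be triggered deliberately. A sketch of mine; note that the iterator's own remove() resynchronizes expectedModCount, so it does not trip the check:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;

public class FailFastDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        Iterator<String> it = map.keySet().iterator();
        it.next();
        map.put("c", 3); // structural modification: modCount changes
        try {
            it.next();   // expectedModCount no longer matches
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }

        // The iterator's own remove() keeps the counts in sync:
        Iterator<String> it2 = map.keySet().iterator();
        it2.next();
        it2.remove();                   // no exception here
        System.out.println(map.size()); // 2
    }
}
```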
    /*
     * Three subclasses implement the HashIterator abstract class above
     * (and thus the Iterator interface), iterating over values, keys and
     * entries respectively; each only needs to override next().
     */
    private final class ValueIterator extends HashIterator<V> {
        public V next() {
            return nextEntry().value;
        }
    }

    private final class KeyIterator extends HashIterator<K> {
        public K next() {
            return nextEntry().getKey();
        }
    }

    private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {
        public Map.Entry<K,V> next() {
            return nextEntry();
        }
    }

    // Subclass overrides these to alter behavior of views' iterator() method
    // Factory methods for the corresponding iterators
    Iterator<K> newKeyIterator()   {
        return new KeyIterator();
    }
    Iterator<V> newValueIterator()   {
        return new ValueIterator();
    }
    Iterator<Map.Entry<K,V>> newEntryIterator()   {
        return new EntryIterator();
    }


    // Views

    private transient Set<Map.Entry<K,V>> entrySet = null;

    /**
     * Returns a {@link Set} view of the keys contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own <tt>remove</tt> operation), the results of
     * the iteration are undefined.  The set supports element removal,
     * which removes the corresponding mapping from the map, via the
     * <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
     * <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
     * operations.  It does not support the <tt>add</tt> or <tt>addAll</tt>
     * operations.
     *
     * Called on a HashMap instance to obtain the Set view of its keys;
     * the keySet field itself is declared in AbstractMap.
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        return (ks != null ? ks : (keySet = new KeySet()));
    }

    private final class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return newKeyIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsKey(o);
        }
        public boolean remove(Object o) {
            return HashMap.this.removeEntryForKey(o) != null;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }
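The "backed by the map" semantics of these views is worth seeing concretely; changes flow in both directions, but adding through the view is unsupported:

```java
import java.util.HashMap;
import java.util.Set;

public class KeySetViewDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        Set<String> keys = map.keySet();
        keys.remove("a");                       // removing from the view...
        System.out.println(map.size());         // 1: ...removes from the map
        map.put("c", 3);                        // changes to the map...
        System.out.println(keys.contains("c")); // true: ...show in the view
        // keys.add("d") would throw UnsupportedOperationException
    }
}
```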

    /**
     * Returns a {@link Collection} view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice-versa.  If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own <tt>remove</tt> operation),
     * the results of the iteration are undefined.  The collection
     * supports element removal, which removes the corresponding
     * mapping from the map, via the <tt>Iterator.remove</tt>,
     * <tt>Collection.remove</tt>, <tt>removeAll</tt>,
     * <tt>retainAll</tt> and <tt>clear</tt> operations.  It does not
     * support the <tt>add</tt> or <tt>addAll</tt> operations.
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        return (vs != null ? vs : (values = new Values()));
    }

    private final class Values extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return newValueIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsValue(o);
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    /** * Returns a {@link Set} view of the mappings contained in this map. * The set is backed by the map, so changes to the map are * reflected in the set, and vice-versa. If the map is modified * while an iteration over the set is in progress (except through * the iterator's own <tt>remove</tt> operation, or through the * <tt>setValue</tt> operation on a map entry returned by the * iterator) the results of the iteration are undefined. The set * supports element removal, which removes the corresponding * mapping from the map, via the <tt>Iterator.remove</tt>, * <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and * <tt>clear</tt> operations. It does not support the * <tt>add</tt> or <tt>addAll</tt> operations. * * @return a set view of the mappings contained in this map * Obtains the Set of all Entry objects in this HashMap. */
    public Set<Map.Entry<K,V>> entrySet() {
        return entrySet0();
    }

    private Set<Map.Entry<K,V>> entrySet0() {
        Set<Map.Entry<K,V>> es = entrySet;
        return es != null ? es : (entrySet = new EntrySet());
    }

    private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public Iterator<Map.Entry<K,V>> iterator() {
            return newEntryIterator();
        }
        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<K,V> e = (Map.Entry<K,V>) o;
            Entry<K,V> candidate = getEntry(e.getKey());
            return candidate != null && candidate.equals(e);
        }
        public boolean remove(Object o) {
            return removeMapping(o) != null;
        }
        public int size() {
            return size;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }
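
EntrySet behaves the same way for whole mappings, and iterating it is the idiomatic way to visit keys and values together; Map.Entry.setValue even writes through to the map. A small illustration (class name is mine):

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("x", 1);
        map.put("y", 2);

        // entrySet() walks keys and values together, in one pass
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            e.setValue(e.getValue() * 10); // setValue writes through to the map
        }

        System.out.println(map.get("x")); // 10
        System.out.println(map.get("y")); // 20
    }
}
```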

    /** * Save the state of the <tt>HashMap</tt> instance to a stream (i.e., * serialize it). * * @serialData The <i>capacity</i> of the HashMap (the length of the * bucket array) is emitted (int), followed by the * <i>size</i> (an int, the number of key-value * mappings), followed by the key (Object) and value (Object) * for each key-value mapping. The key-value mappings are * emitted in no particular order. * Invoked when serializing. */
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException
    {
        Iterator<Map.Entry<K,V>> i =
            (size > 0) ? entrySet0().iterator() : null;

        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();

        // Write out number of buckets
        s.writeInt(table.length);

        // Write out size (number of Mappings)
        s.writeInt(size);

        // Write out keys and values (alternating)
        if (size > 0) {
            for(Map.Entry<K,V> e : entrySet0()) {
                s.writeObject(e.getKey());
                s.writeObject(e.getValue());
            }
        }
    }
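
The private writeObject above is what the serialization machinery calls when a HashMap is written to an ObjectOutputStream. A round-trip sketch (class name is mine) showing that deserialization produces an equal but distinct map:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class SerializeDemo {
    public static void main(String[] args) throws Exception {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put(null, 0); // a null key survives serialization like any entry

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(map); // triggers the private writeObject above
        }

        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            HashMap<String, Integer> copy = (HashMap<String, Integer>) in.readObject();
            System.out.println(copy.equals(map)); // true: same mappings
            System.out.println(copy == map);      // false: a distinct object
        }
    }
}
```

Note that the copy is not produced by a constructor: readObject rebuilds the table and re-inserts the entries, which is exactly what the private readObject below does.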

    private static final long serialVersionUID = 362498820763181265L;

    /** * Reconstitute the {@code HashMap} instance from a stream (i.e., * deserialize it). * Invoked when deserializing. */
    private void readObject(java.io.ObjectInputStream s)
         throws IOException, ClassNotFoundException
    {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new InvalidObjectException("Illegal load factor: " +
                                               loadFactor);

        // set hashSeed (can only happen after VM boot)
        Holder.UNSAFE.putIntVolatile(this, Holder.HASHSEED_OFFSET,
                sun.misc.Hashing.randomHashSeed(this));

        // Read in number of buckets and allocate the bucket array;
        //Read the table length, then discard it.
        s.readInt(); // ignored

        // Read number of mappings
        //Read the number of mappings.
        int mappings = s.readInt();
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " +
                                               mappings);
        //Compute the initial capacity; with the default load factor (0.75)
        //the capacity ends up roughly 1/3 larger than the mapping count.
        int initialCapacity = (int) Math.min(
                // capacity chosen by number of mappings
                // and desired load (if >= 0.25)
                mappings * Math.min(1 / loadFactor, 4.0f),
                // we have limits...
                HashMap.MAXIMUM_CAPACITY);
        int capacity = 1;
        // find smallest power of two which holds all mappings
        //Round capacity up to the smallest power of two that holds all mappings.
        while (capacity < initialCapacity) {
            capacity <<= 1;
        }

        table = new Entry[capacity];
        threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
        useAltHashing = sun.misc.VM.isBooted() &&
                (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);

        init();  // Give subclass a chance to do its thing.

        // Read the keys and values, and put the mappings in the HashMap
        for (int i=0; i<mappings; i++) {
            K key = (K) s.readObject();
            V value = (V) s.readObject();
            putForCreate(key, value);
        }
    }

    // These methods are used when serializing HashSets
    int   capacity()     { return table.length; }
    float loadFactor()   { return loadFactor;   }
}

Above is the HashMap source. The English comments have been left in place so you can read them side by side; almost every method also carries an added note explaining what it does.
Some takeaways:
1. Object serialization is simple at heart: ObjectInputStream and ObjectOutputStream read and write whole objects. For a walkthrough of reading and writing objects, and of the transient keyword, see my post: http://blog.csdn.net/u010156024/article/details/48345257

2. The clone method is equally simple: super.clone() produces the cloned object, and the data is then copied into it. This is where shallow versus deep copying matters.
In one sentence: a shallow copy copies values and copies references. For plain values there is nothing to discuss; the copy is a different object, so changing a plain value in it does not affect the original. References are another story: after a shallow copy, the copy's references still point at the original's referenced objects, so mutating a referenced object through the copy is visible through the original.
A deep copy has no such problem: plain values are copied and the referenced objects are cloned as well, so nothing is shared with the original. A deep copy yields two fully independent objects of the same class.
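
HashMap.clone is exactly the shallow case: the table is copied, but keys and values are shared with the original. A quick demonstration (class name is mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;

public class ShallowCloneDemo {
    public static void main(String[] args) {
        HashMap<String, List<Integer>> map = new HashMap<>();
        map.put("nums", new ArrayList<>(Arrays.asList(1, 2)));

        @SuppressWarnings("unchecked")
        HashMap<String, List<Integer>> copy =
            (HashMap<String, List<Integer>>) map.clone();

        copy.get("nums").add(3); // mutates the List shared with the original

        System.out.println(map.get("nums")); // [1, 2, 3] - the original sees it
    }
}
```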
3. Internally, HashMap stores key-value pairs in an array. The hash of the key determines the slot for storing and looking up an entry, which is what makes access fast. But precisely because positions are computed from hashes, collisions can occur: the computed slot may already be occupied.
Java's HashMap resolves collisions with linked lists. Each element of the table array is a HashMap.Entry (declared as static class Entry<K,V> implements Map.Entry<K,V>), and each Entry holds a next reference of type Entry, so the entries in one bucket chain together into a list.
In other words, without collisions, added elements simply fill array slots; with collisions, they form chains. When the map grows past its threshold (capacity x load factor) and an insert collides, the table is resized.
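
Chaining is easy to observe from the outside with two keys whose hash codes genuinely collide; "Aa" and "BB" are a classic pair (both hash to 2112). A sketch (class name is mine):

```java
import java.util.HashMap;

public class CollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" are a well-known String hashCode collision:
        // both hash to 2112, so they land in the same bucket.
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true

        HashMap<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2); // collides; chained within the same bucket

        // Lookup still works: the chain is walked and equals() decides
        System.out.println(map.get("Aa")); // 1
        System.out.println(map.get("BB")); // 2
    }
}
```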
4. One book says of HashMap: it is created with a default load factor of 0.75, a trade-off between time and space cost. Raising the load factor reduces the memory occupied by the table array but increases lookup cost; lowering it speeds up lookups but makes the table array occupy more memory.
That sentence takes some unpacking. Here is my own reading; corrections are welcome if I have it wrong.

First case: raise the load factor. The threshold (capacity x load factor) rises, so less of the table sits unused and the table occupies relatively less memory for the same data. But with more entries packed in before a resize, collisions increase, and note that a collision alone does not trigger a resize: resizing happens only when the size exceeds the threshold and an insert collides. Until then, longer chains mean higher lookup cost.
Second case: lower the load factor. The threshold drops, resizing happens readily, entries stay spread out, and lookups get faster; but the frequent resizes and the sparser table cost more memory.
5. A few remarks.
The common HashMap operations (insert, remove, lookup, iteration) are all annotated in the source above; read the comments together with the surrounding code.
Note that put accepts a null key (and a null value). A null key always lives in bucket 0: put scans that bucket's chain for an existing null-key entry and, if found, replaces its value. A HashMap therefore holds at most one entry with a null key.
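
A short demonstration of the single-null-key rule (class name is mine):

```java
import java.util.HashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put(null, "first");
        map.put(null, "second"); // overwrites: at most one null key exists

        System.out.println(map.size());    // 1
        System.out.println(map.get(null)); // second
    }
}
```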
6. What is HashMap's maximum capacity? MAXIMUM_CAPACITY is 1<<30 (1073741824), roughly half of Integer.MAX_VALUE (2147483647), so the table holds on the order of a billion buckets at most. Why the cap? Look at the constructors and the resize() method: the capacity is clamped to MAXIMUM_CAPACITY everywhere it is computed.
7. The fail-fast mechanism:

java.util.HashMap is not thread-safe: if another thread modifies the map while an iterator is in use, the iterator throws ConcurrentModificationException. That is the so-called fail-fast policy.

It is implemented through the modCount field. As the name suggests, modCount counts modifications; every structural change to the HashMap increments it, and when an iterator is created it copies the current value into its own expectedModCount:

HashIterator() {
    expectedModCount = modCount;
    if (size > 0) { // advance to first entry
        Entry[] t = table;
        while (index < t.length && (next = t[index++]) == null)
            ;
    }
}

During iteration, modCount is compared against expectedModCount; a mismatch means the map has been modified outside the iterator (for example, by another thread):
(In some earlier JDK versions modCount was declared volatile to make modifications visible across threads; later versions dropped the qualifier, since fail-fast is only a best-effort check anyway.)

final Entry<K,V> nextEntry() {
    if (modCount != expectedModCount)
        throw new ConcurrentModificationException();

The HashMap API documentation puts it this way:
The iterators returned by all of this class's "collection view methods" are fail-fast: if the map is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator throws ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

Note that the fail-fast behavior of an iterator cannot be guaranteed: generally speaking, no hard guarantees are possible in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. It is therefore wrong to write a program whose correctness depends on this exception; fail-fast behavior should be used only to detect bugs. (This part is adapted from: http://beyond99.blog.51cto.com/1469451/429789/)
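
A sketch of both the failure mode and the sanctioned escape hatch, Iterator.remove (class name is mine):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        try {
            for (String key : map.keySet()) {
                map.remove(key); // structural change outside the iterator
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }

        // The safe way: remove through the iterator itself
        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            it.remove(); // keeps expectedModCount in sync with modCount
        }
        System.out.println(map.isEmpty()); // true
    }
}
```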

8. The transfer method in detail

/** * Transfers all entries from current table to newTable. * Copies every entry from the old table into the new one; this runs * right after a resize, which makes it a hot spot for HashMap performance. * The three assignments in the loop are a little subtle: * next first saves e's successor in the old chain; * e.next is then pointed at whatever newTable[i] already holds (null if * the bucket is empty; otherwise the entries moved earlier slide one * link down the chain); * newTable[i] is then pointed at e, completing the move. * Finally e = next advances along the old chain; the while loop ends when * this chain is exhausted, and the outer loop moves on to the next old bucket. */
    void transfer(Entry[] newTable, boolean rehash) {
        int newCapacity = newTable.length;
        for (Entry<K,V> e : table) {
            while(null != e) {
                Entry<K,V> next = e.next;
                if (rehash) {
                    e.hash = null == e.key ? 0 : hash(e.key);
                }
                int i = indexFor(e.hash, newCapacity);
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            }
        }
    }
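
transfer computes each entry's new bucket with indexFor, which reduces a hash to an index using a bitmask; this is equivalent to a modulo only because the capacity is a power of two. A sketch (class name is mine; the method mirrors the JDK's indexFor):

```java
public class IndexForDemo {
    // Equivalent of HashMap.indexFor: h % length via a mask,
    // valid only when length is a power of two
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        int h = 123456789;
        System.out.println(indexFor(h, 16) == h % 16); // true for non-negative h

        // After doubling the capacity, an entry either stays at its
        // old index or moves exactly oldCapacity slots up:
        int oldIdx = indexFor(h, 16);
        int newIdx = indexFor(h, 32);
        System.out.println(newIdx == oldIdx || newIdx == oldIdx + 16); // true
    }
}
```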
  • HashSet source code analysis
package java.util;

/** * @param <E> the type of elements maintained by this set * * @author Josh Bloch * @author Neal Gafter * @see Collection * @see Set * @see TreeSet * @see HashMap * @since 1.2 */

public class HashSet<E>
    extends AbstractSet<E>
    implements Set<E>, Cloneable, java.io.Serializable
{
    static final long serialVersionUID = -5024744406713321676L;
    /** * HashSet is implemented on top of HashMap; this map field is the object * that HashSet wraps. */
    private transient HashMap<E,Object> map;

    // Dummy value to associate with an Object in the backing Map
    /** * A HashSet wraps a HashMap: the map's keys form the Set (unique, with no * ordering required), and Set<K> HashMap.keySet() returns exactly that Set. * Because every element is stored as a key, the value slot holds a * placeholder: all entries share the single PRESENT object below. */
    private static final Object PRESENT = new Object();

    /** * Constructs a new, empty set; the backing <tt>HashMap</tt> instance has * default initial capacity (16) and load factor (0.75). * Constructor: builds the default backing HashMap. */
    public HashSet() {
        map = new HashMap<>();
    }

    /** * Constructs a new set containing the elements in the specified * collection. The <tt>HashMap</tt> is created with default load factor * (0.75) and an initial capacity sufficient to contain the elements in * the specified collection. * * @param c the collection whose elements are to be placed into this set * @throws NullPointerException if the specified collection is null */
    public HashSet(Collection<? extends E> c) {
        map = new HashMap<>(Math.max((int) (c.size()/.75f) + 1, 16));
        addAll(c);
    }

    /** * Constructs a new, empty set; the backing <tt>HashMap</tt> instance has * the specified initial capacity and the specified load factor. * * @param initialCapacity the initial capacity of the hash map * @param loadFactor the load factor of the hash map * @throws IllegalArgumentException if the initial capacity is less * than zero, or if the load factor is nonpositive */
    public HashSet(int initialCapacity, float loadFactor) {
        map = new HashMap<>(initialCapacity, loadFactor);
    }

    /** * Constructs a new, empty set; the backing <tt>HashMap</tt> instance has * the specified initial capacity and default load factor (0.75). * * @param initialCapacity the initial capacity of the hash table * @throws IllegalArgumentException if the initial capacity is less * than zero */
    public HashSet(int initialCapacity) {
        map = new HashMap<>(initialCapacity);
    }

    /** * Constructs a new, empty linked hash set. (This package private * constructor is only used by LinkedHashSet.) The backing * HashMap instance is a LinkedHashMap with the specified initial * capacity and the specified load factor. * * @param initialCapacity the initial capacity of the hash map * @param loadFactor the load factor of the hash map * @param dummy ignored (distinguishes this * constructor from other int, float constructor.) * @throws IllegalArgumentException if the initial capacity is less * than zero, or if the load factor is nonpositive * The dummy parameter only disambiguates the constructor signature. */
    HashSet(int initialCapacity, float loadFactor, boolean dummy) {
        map = new LinkedHashMap<>(initialCapacity, loadFactor);
    }

    /** * Returns an iterator over the elements in this set. The elements * are returned in no particular order. * * @return an Iterator over the elements in this set * @see ConcurrentModificationException * The set's iterator is simply map.keySet().iterator(): it walks only * the keys of the backing HashMap. */
    public Iterator<E> iterator() {
        return map.keySet().iterator();
    }

    /** * Returns the number of elements in this set (its cardinality). * * @return the number of elements in this set (its cardinality) */
    public int size() {
        return map.size();
    }

    /** * Returns <tt>true</tt> if this set contains no elements. * * @return <tt>true</tt> if this set contains no elements */
    public boolean isEmpty() {
        return map.isEmpty();
    }

    /** * Returns <tt>true</tt> if this set contains the specified element. * More formally, returns <tt>true</tt> if and only if this set * contains an element <tt>e</tt> such that * <tt>(o==null&nbsp;?&nbsp;e==null&nbsp;:&nbsp;o.equals(e))</tt>. * * @param o element whose presence in this set is to be tested * @return <tt>true</tt> if this set contains the specified element */
    public boolean contains(Object o) {
        return map.containsKey(o);
    }

    /** * Adds the specified element to this set if it is not already present. * More formally, adds the specified element <tt>e</tt> to this set if * this set contains no element <tt>e2</tt> such that * <tt>(e==null&nbsp;?&nbsp;e2==null&nbsp;:&nbsp;e.equals(e2))</tt>. * If this set already contains the element, the call leaves the set * unchanged and returns <tt>false</tt>. * * @param e element to be added to this set * @return <tt>true</tt> if this set did not already contain the specified * element * The boolean result: false when the backing map already held the key * (the add failed); true when it did not (the add succeeded). * This delegates to HashMap.put. Since HashMap accepts a null key, a * HashSet can contain null as well; a second add(null) returns false and * leaves the single null element in place (only the value slot, the * shared PRESENT object, is rewritten). */
    public boolean add(E e) {
        return map.put(e, PRESENT)==null;
    }
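
The boolean contract of add is easy to verify (class name is mine):

```java
import java.util.HashSet;

public class HashSetAddDemo {
    public static void main(String[] args) {
        HashSet<String> set = new HashSet<>();
        System.out.println(set.add("a")); // true:  map.put returned null
        System.out.println(set.add("a")); // false: key already present
        System.out.println(set.size());   // 1
    }
}
```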

    /** * Removes the specified element from this set if it is present. * More formally, removes an element <tt>e</tt> such that * <tt>(o==null&nbsp;?&nbsp;e==null&nbsp;:&nbsp;o.equals(e))</tt>, * if this set contains such an element. Returns <tt>true</tt> if * this set contained the element (or equivalently, if this set * changed as a result of the call). (This set will not contain the * element once the call returns.) * * @param o object to be removed from this set, if present * @return <tt>true</tt> if the set contained the specified element */
    public boolean remove(Object o) {
        return map.remove(o)==PRESENT;
    }

    /** * Removes all of the elements from this set. * The set will be empty after this call returns. */
    public void clear() {
        map.clear();
    }

    /** * Returns a shallow copy of this <tt>HashSet</tt> instance: the elements * themselves are not cloned. * * @return a shallow copy of this set * Like HashMap.clone, this is a shallow copy: values and references are * copied, but the referenced objects themselves are not. */
    public Object clone() {
        try {
            HashSet<E> newSet = (HashSet<E>) super.clone();
            newSet.map = (HashMap<E, Object>) map.clone();
            return newSet;
        } catch (CloneNotSupportedException e) {
            throw new InternalError();
        }
    }

    /** * Save the state of this <tt>HashSet</tt> instance to a stream (that is, * serialize it). * * @serialData The capacity of the backing <tt>HashMap</tt> instance * (int), and its load factor (float) are emitted, followed by * the size of the set (the number of elements it contains) * (int), followed by all of its elements (each an Object) in * no particular order. * Serialization method; for notes on serialization see my post: * http://blog.csdn.net/u010156024/article/details/48345257 */
    private void writeObject(java.io.ObjectOutputStream s)
        throws java.io.IOException {
        // Write out any hidden serialization magic
        s.defaultWriteObject();
        // Write out HashMap capacity and load factor
        s.writeInt(map.capacity());
        s.writeFloat(map.loadFactor());
        // Write out size
        s.writeInt(map.size());
        // Write out all elements in the proper order.
        for (E e : map.keySet())
            s.writeObject(e);
    }

    /** * Reconstitute the <tt>HashSet</tt> instance from a stream (that is, * deserialize it). * Deserialization method; for notes on serialization see my post: * http://blog.csdn.net/u010156024/article/details/48345257 */
    private void readObject(java.io.ObjectInputStream s)
        throws java.io.IOException, ClassNotFoundException {
        // Read in any hidden serialization magic
        s.defaultReadObject();
        // Read in HashMap capacity and load factor and create backing HashMap
        int capacity = s.readInt();
        float loadFactor = s.readFloat();
        map = (((HashSet)this) instanceof LinkedHashSet ?
               new LinkedHashMap<E,Object>(capacity, loadFactor) :
               new HashMap<E,Object>(capacity, loadFactor));
        // Read in size
        int size = s.readInt();
        // Read in all elements in the proper order.
        for (int i=0; i<size; i++) {
            E e = (E) s.readObject();
            map.put(e, PRESENT);
        }
    }
}

To sum up HashSet:
1. HashSet is built on HashMap: the map's keys form the Set. Why do the keys qualify as a Set? Because HashMap keys are unique and unordered, which is exactly the Set contract.
The values, by contrast, may repeat, which is why HashMap exposes them as a Collection (via values()) rather than a Set.
2. The HashSet source is short and simple, but it differs from HashMap in one respect: adding and removing are easy, yet there is no lookup of a single element. You can only test membership with contains or find an element by iterating.
3. Since HashMap accepts a null key and HashSet wraps HashMap, a HashSet can contain null too; in that case the value is the non-null PRESENT object. Adding null repeatedly behaves as in HashMap: only one null element ever exists. The key is never replaced; only the value slot is overwritten, and since every value is the same PRESENT object, the overwrite is invisible from the Set's point of view.

That is my personal understanding of the HashMap and HashSet classes. If anything in the comments is wrong or unclear, corrections and criticism are very welcome!

    Original author: 龙吟在天
    Original post: https://blog.csdn.net/u010156024/article/details/48374865
    This article is reposted from the web to share knowledge; if it infringes, please contact the blogger for removal.