HashMap Internals (Part 3): The Meaning of Capacity, Load Factor, and Size

This article walks through the annotated JDK source of HashMap to explain what its capacity, load factor, and size mean. Corrections and suggestions are welcome.

/*
 * Copyright (c) 1997, 2013, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

package java.util;

import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;

/**
 * Hash table based implementation of the Map interface.  This
 * implementation provides all of the optional map operations, and permits
 * null values and the null key.  (The HashMap class is roughly equivalent
 * to Hashtable, except that it is unsynchronized and permits nulls.)  This
 * class makes no guarantees as to the order of the map; in particular, it
 * does not guarantee that the order will remain constant over time.
 *
 * This implementation provides constant-time performance for the basic
 * operations (get and put), assuming the hash function disperses the
 * elements properly among the buckets.  Iteration over collection views
 * requires time proportional to the "capacity" of the HashMap instance
 * (the number of buckets) plus its size (the number of key-value
 * mappings).  Thus, it's very important not to set the initial capacity
 * too high (or the load factor too low) if iteration performance is
 * important.
 *
 * An instance of HashMap has two parameters that affect its performance:
 * initial capacity and load factor.  The capacity is the number of
 * buckets in the hash table, and the initial capacity is simply the
 * capacity at the time the hash table is created.  The load factor is a
 * measure of how full the hash table is allowed to get before its
 * capacity is automatically increased.  When the number of entries in the
 * hash table exceeds the product of the load factor and the current
 * capacity, the hash table is rehashed (that is, internal data structures
 * are rebuilt) so that the hash table has approximately twice the number
 * of buckets.
 *
 * As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of the
 * operations of the HashMap class, including get and put).  The expected
 * number of entries in the map and its load factor should be taken into
 * account when setting its initial capacity, so as to minimize the number
 * of rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * If many mappings are to be stored in a HashMap instance, creating it
 * with a sufficiently large capacity will allow the mappings to be stored
 * more efficiently than letting it perform automatic rehashing as needed
 * to grow the table.  Note that using many keys with the same
 * {@code hashCode()} is a sure way to slow down performance of any hash
 * table.  To ameliorate impact, when keys are {@link Comparable}, this
 * class may use comparison order among keys to help break ties.
 *
 * Note that this implementation is not synchronized.
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it must be synchronized
 * externally.  (A structural modification is any operation that adds or
 * deletes one or more mappings; merely changing the value associated with
 * a key that an instance already contains is not a structural
 * modification.)  This is typically accomplished by synchronizing on some
 * object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap} method.
 * This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:
 *
 *   Map m = Collections.synchronizedMap(new HashMap(...));
 *
 * The iterators returned by all of this class's "collection view methods"
 * are fail-fast: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * remove method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of
 * concurrent modification, the iterator fails quickly and cleanly, rather
 * than risking arbitrary, non-deterministic behavior at an undetermined
 * time in the future.
 *
 * Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in
 * the presence of unsynchronized concurrent modification.  Fail-fast
 * iterators throw ConcurrentModificationException on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: the fail-fast behavior of iterators
 * should be used only to detect bugs.
 *
 * This class is a member of the Java Collections Framework.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */
    public class HashMap<K,V> extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    private static final long serialVersionUID = 362498820763181265L;

    /*
     * Implementation notes.
     *
     * This map usually acts as a binned (bucketed) hash table, but
     * when bins get too large, they are transformed into bins of
     * TreeNodes, each structured similarly to those in
     * java.util.TreeMap.  Most methods try to use normal bins, but
     * relay to TreeNode methods when applicable (simply by checking
     * instanceof a node).  Bins of TreeNodes may be traversed and
     * used like any others, but additionally support faster lookup
     * when overpopulated.  However, since the vast majority of bins in
     * normal use are not overpopulated, checking for existence of
     * tree bins may be delayed in the course of table methods.
     *
     * Tree bins (i.e., bins whose elements are all TreeNodes) are
     * ordered primarily by hashCode, but in the case of ties, if two
     * elements are of the same "class C implements Comparable<C>"
     * type then their compareTo method is used for ordering.  (We
     * conservatively check generic types via reflection to validate
     * this -- see method comparableClassFor.)  The added complexity
     * of tree bins is worthwhile in providing worst-case O(log n)
     * operations when keys either have distinct hashes or are
     * orderable.  Thus, performance degrades gracefully under
     * accidental or malicious usages in which hashCode() methods
     * return values that are poorly distributed, as well as those in
     * which many keys share a hashCode, so long as they are also
     * Comparable.  (If neither of these apply, we may waste about a
     * factor of two in time and space compared to taking no
     * precautions.  But the only known cases stem from poor user
     * programming practices that are already so slow that this makes
     * little difference.)
     *
     * Because TreeNodes are about twice the size of regular nodes, we
     * use them only when bins contain enough nodes to warrant use
     * (see TREEIFY_THRESHOLD).  And when they become too small (due to
     * removal or resizing) they are converted back to plain bins.  In
     * usages with well-distributed user hashCodes, tree bins are
     * rarely used.  Ideally, under random hashCodes, the frequency of
     * nodes in bins follows a Poisson distribution
     * (http://en.wikipedia.org/wiki/Poisson_distribution) with a
     * parameter of about 0.5 on average for the default resizing
     * threshold of 0.75, although with a large variance because of
     * resizing granularity.  Ignoring variance, the expected
     * occurrences of list size k are (exp(-0.5) * pow(0.5, k) /
     * factorial(k)).  The first values are:
     *
     * 0:    0.60653066
     * 1:    0.30326533
     * 2:    0.07581633
     * 3:    0.01263606
     * 4:    0.00157952
     * 5:    0.00015795
     * 6:    0.00001316
     * 7:    0.00000094
     * 8:    0.00000006
     * more: less than 1 in ten million
     *
     * The root of a tree bin is normally its first node.  However,
     * sometimes (currently only upon Iterator.remove), the root might
     * be elsewhere, but can be recovered following parent links
     * (method TreeNode.root()).
     *
     * All applicable internal methods accept a hash code as an
     * argument (as normally supplied from a public method), allowing
     * them to call each other without recomputing user hashCodes.
     * Most internal methods also accept a "tab" argument, that is
     * normally the current table, but may be a new or old one when
     * resizing or converting.
     *
     * When bin lists are treeified, split, or untreeified, we keep
     * them in the same relative access/traversal order (i.e., field
     * Node.next) to better preserve locality, and to slightly
     * simplify handling of splits and traversals that invoke
     * iterator.remove.  When using comparators on insertion, to keep a
     * total ordering (or as close as is required here) across
     * rebalancings, we compare classes and identityHashCodes as
     * tie-breakers.
     *
     * The use and transitions among plain vs tree modes is
     * complicated by the existence of subclass LinkedHashMap.  See
     * below for hook methods defined to be invoked upon insertion,
     * removal and access that allow LinkedHashMap internals to
     * otherwise remain independent of these mechanics.  (This also
     * requires that a map instance be passed to some utility methods
     * that may create new nodes.)
     *
     * The concurrent-programming-like SSA-based coding style helps
     * avoid aliasing errors amid all of the twisty pointer operations.
     */
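As an aside, the Poisson probability table quoted in the notes above is easy to reproduce. A minimal standalone sketch (ordinary Java, not part of the JDK source; the class name is made up) that prints the same values:

    // Expected fraction of bins holding exactly k entries under random hashing,
    // P(k) = exp(-0.5) * 0.5^k / k!, as quoted in the implementation notes.
    public class PoissonCheck {
        public static void main(String[] args) {
            double lambda = 0.5; // average bin occupancy at the default 0.75 threshold
            double p = Math.exp(-lambda);
            for (int k = 0; k <= 8; k++) {
                System.out.printf("%d: %.8f%n", k, p);
                p = p * lambda / (k + 1); // P(k+1) = P(k) * lambda / (k+1)
            }
        }
    }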

    /**
     * The default initial capacity - MUST be a power of two.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none specified in constructor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;
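With these defaults, a fresh table has 16 buckets and is resized once it holds more than 16 * 0.75 = 12 entries. When the expected entry count is known up front, choosing the initial capacity as roughly expected / loadFactor + 1 avoids every intermediate rehash. A small usage sketch (application code, not part of the source; names and numbers are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    public class PresizeDemo {
        public static void main(String[] args) {
            int expectedEntries = 1000;
            // Choose an initial capacity so the threshold (capacity * 0.75) is not
            // exceeded; the table is rounded up to the next power of two (2048).
            int initialCapacity = (int) (expectedEntries / 0.75f) + 1; // 1334
            Map<Integer, String> map = new HashMap<>(initialCapacity);
            for (int i = 0; i < expectedEntries; i++)
                map.put(i, "v" + i); // no resize() happens during this loop
            System.out.println(map.size()); // 1000
        }
    }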

    /**
     * The bin count threshold for using a tree rather than list for a
     * bin.  Bins are converted to trees when adding an element to a
     * bin with at least this many nodes.  The value must be greater
     * than 2 and should be at least 8 to mesh with assumptions in
     * tree removal about conversion back to plain bins upon
     * shrinkage.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * The bin count threshold for untreeifying a (split) bin during a
     * resize operation.  Should be less than TREEIFY_THRESHOLD, and at
     * most 6 to mesh with shrinkage detection under removal.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * The smallest table capacity for which bins may be treeified.
     * (Otherwise the table is resized if too many nodes in a bin.)
     * Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
     * between resizing and treeification thresholds.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;
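Taken together, these constants mean a bin becomes a red-black tree only when it holds at least 8 nodes and the table already has at least 64 buckets; a long bin in a smaller table triggers a resize instead (see treeifyBin further down). A condensed paraphrase of that decision (illustrative only, not JDK code):

    // Illustrative paraphrase: treeifyBin() resizes small tables instead of treeifying.
    static boolean shouldTreeify(int binCount, int tableLength) {
        return binCount >= TREEIFY_THRESHOLD        // bin has grown to 8 nodes
            && tableLength >= MIN_TREEIFY_CAPACITY; // and the table has >= 64 buckets
    }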

    /**
     * Basic hash bin node, used for most entries.  (See below for
     * TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }

    /* ---------------- Static utilities -------------- */

    /**
     * Computes key.hashCode() and spreads (XORs) higher bits of hash
     * to lower.  Because the table uses power-of-two masking, sets of
     * hashes that vary only in bits above the current mask will
     * always collide.  (Among known examples are sets of Float keys
     * holding consecutive whole numbers in small tables.)  So we
     * apply a transform that spreads the impact of higher bits
     * downward.  There is a tradeoff between speed, utility, and
     * quality of bit-spreading.  Because many common sets of hashes
     * are already reasonably distributed (so don't benefit from
     * spreading), and because we use trees to handle large sets of
     * collisions in bins, we just XOR some shifted bits in the
     * cheapest possible way to reduce systematic lossage, as well as
     * to incorporate impact of the highest bits that would otherwise
     * never be used in index calculations because of table bounds.
     */
    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }
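To see why spreading matters, consider a 16-bucket table, where only the low 4 bits of the hash select a bucket. A standalone sketch with two hand-picked hash codes that differ only in their high bits:

    public class HashSpreadDemo {
        public static void main(String[] args) {
            int h1 = 0x10000;          // differs from h2 only above bit 15
            int h2 = 0x20000;
            int mask = 16 - 1;         // (n - 1) for a 16-bucket table
            // Without spreading, both collapse into bucket 0:
            System.out.println((h1 & mask) + " " + (h2 & mask));         // 0 0
            // With the XOR spread used by HashMap.hash(), they separate:
            System.out.println(((h1 ^ (h1 >>> 16)) & mask) + " " +
                               ((h2 ^ (h2 >>> 16)) & mask));             // 1 2
        }
    }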

    /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (int i = 0; i < ts.length; ++i) {
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                        ((p = (ParameterizedType)t).getRawType() ==
                         Comparable.class) &&
                        (as = p.getActualTypeArguments()) != null &&
                        as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     */
    @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable)k).compareTo(x));
    }

    /**
     * Returns a power of two size for the given target capacity.
     */
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
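The shift-or cascade copies the highest set bit of cap - 1 into every lower bit position, so n + 1 is the smallest power of two greater than or equal to cap; starting from cap - 1 keeps an exact power of two from being doubled. A few sample values (assuming the method is copied into a scratch class for testing, since it is package-private):

    // tableSizeFor rounds a requested capacity up to the next power of two.
    System.out.println(tableSizeFor(1));   // 1
    System.out.println(tableSizeFor(13));  // 16
    System.out.println(tableSizeFor(16));  // 16 (already a power of two, not doubled)
    System.out.println(tableSizeFor(17));  // 32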

    /* ---------------- Fields -------------- */

    /**
     * The table, initialized on first use, and resized as
     * necessary.  When allocated, length is always a power of two.
     * (We also tolerate length zero in some operations to allow
     * bootstrapping mechanics that are currently not needed.)
     */
    transient Node<K,V>[] table;

    /**
     * Holds cached entrySet().  Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    /**
     * The number of key-value mappings contained in this map.
     */
    transient int size;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of mappings in
     * the HashMap or otherwise modify its internal structure (e.g.,
     * rehash).  This field is used to make iterators on Collection-views of
     * the HashMap fail-fast.  (See ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     *
     * @serial
     */
    // (The javadoc description is true upon serialization.
    // Additionally, if the table array has not been allocated, this
    // field holds the initial array capacity, or zero signifying
    // DEFAULT_INITIAL_CAPACITY.)
    int threshold;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    final float loadFactor;

    /* ---------------- Public operations -------------- */

    /**
     * Constructs an empty HashMap with the specified initial
     * capacity and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        this.threshold = tableSizeFor(initialCapacity);
    }

    /**
     * Constructs an empty HashMap with the specified initial
     * capacity and the default load factor (0.75).
     *
     * @param  initialCapacity the initial capacity.
     * @throws IllegalArgumentException if the initial capacity is negative.
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty HashMap with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * Constructs a new HashMap with the same mappings as the
     * specified Map.  The HashMap is created with default load factor
     * (0.75) and an initial capacity sufficient to hold the mappings
     * in the specified Map.
     *
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }

    /**
     * Implements Map.putAll and Map constructor.
     *
     * @param m the map
     * @param evict false when initially constructing this map, else
     * true (relayed to method afterNodeInsertion).
     */
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            if (table == null) { // pre-size
                float ft = ((float)s / loadFactor) + 1.0F;
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);
                if (t > threshold)
                    threshold = tableSizeFor(t);
            }
            else if (s > threshold)
                resize();
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /**
     * Returns the number of key-value mappings in this map.
     *
     * @return the number of key-value mappings in this map
     */
    public int size() {
        return size;
    }

    /**
     * Returns true if this map contains no key-value mappings.
     *
     * @return true if this map contains no key-value mappings
     */
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or {@code null} if this map contains no mapping for the key.
     *
     * More formally, if this map contains a mapping from a key
     * {@code k} to a value {@code v} such that {@code (key==null ? k==null :
     * key.equals(k))}, then this method returns {@code v}; otherwise
     * it returns {@code null}.  (There can be at most one such mapping.)
     *
     * A return value of {@code null} does not necessarily
     * indicate that the map contains no mapping for the key; it's also
     * possible that the map explicitly maps the key to {@code null}.
     * The {@link #containsKey containsKey} operation may be used to
     * distinguish these two cases.
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        Node<K,V> e;
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }

    /**
     * Implements Map.get and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @return the node, or null if none
     */
    final Node<K,V> getNode(int hash, Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & hash]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }
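Both getNode and putVal locate a bucket with tab[(n - 1) & hash]. Because the table length n is always a power of two, n - 1 is an all-ones bitmask in the low bits, making the & equivalent to the much costlier hash % n. A standalone illustration with made-up hash values:

    int n = 16;                 // table length, always a power of two
    int[] hashes = {5, 21, 37}; // hypothetical spread hash values
    for (int h : hashes) {
        // (n - 1) & h == h % n when n is a power of two; all three share bucket 5
        System.out.println(((n - 1) & h) + " == " + (h % n));
    }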

    /**
     * Returns true if this map contains a mapping for the
     * specified key.
     *
     * @param   key The key whose presence in this map is to be tested
     * @return true if this map contains a mapping for the specified
     * key.
     */
    public boolean containsKey(Object key) {
        return getNode(hash(key), key) != null;
    }

    /**
     * Associates the specified value with the specified key in this map.
     * If the map previously contained a mapping for the key, the old
     * value is replaced.
     *
     * @param key key with which the specified value is to be associated
     * @param value value to be associated with the specified key
     * @return the previous value associated with key, or
     *         null if there was no mapping for key.
     *         (A null return can also indicate that the map
     *         previously associated null with key.)
     */
    public V put(K key, V value) {
        return putVal(hash(key), key, value, false, true);
    }

    /**
     * Implements Map.put and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to put
     * @param onlyIfAbsent if true, don't change existing value
     * @param evict if false, the table is in creation mode.
     * @return previous value, or null if none
     */
    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0)
            n = (tab = resize()).length;
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else {
            Node<K,V> e; K k;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                e = p;
            else if (p instanceof TreeNode)
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        p.next = newNode(hash, key, value, null);
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        if (++size > threshold)
            resize();
        afterNodeInsertion(evict);
        return null;
    }

    /**
     * Initializes or doubles table size.  If null, allocates in
     * accord with initial capacity target held in field threshold.
     * Otherwise, because we are using power-of-two expansion, the
     * elements from each bin must either stay at same index, or move
     * with a power of two offset in the new table.
     *
     * @return the table
     */
    final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        threshold = newThr;
        @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }
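The (e.hash & oldCap) == 0 test works because doubling the capacity exposes exactly one additional hash bit to the index mask: nodes with that bit clear stay at index j (the "lo" list), and nodes with it set move to j + oldCap (the "hi" list). A standalone illustration with made-up hash values:

    int oldCap = 16, newCap = 32;
    int h1 = 5;   // bit 4 (value 16) is clear
    int h2 = 21;  // 21 = 16 + 5, bit 4 is set
    // Old table: both hashes land in bucket 5.
    System.out.println((h1 & (oldCap - 1)) + " " + (h2 & (oldCap - 1))); // 5 5
    // New table: h1 stays at 5 ("lo" list), h2 moves to 5 + 16 = 21 ("hi" list).
    System.out.println((h1 & (newCap - 1)) + " " + (h2 & (newCap - 1))); // 5 21
    System.out.println((h1 & oldCap) == 0); // true  -> lo
    System.out.println((h2 & oldCap) == 0); // false -> hi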

    /**
     * Replaces all linked nodes in bin at index for given hash unless
     * table is too small, in which case resizes instead.
     */
    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K,V> hd = null, tl = null;
            do {
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    hd = p;
                else {
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            if ((tab[index] = hd) != null)
                hd.treeify(tab);
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @param m mappings to be stored in this map
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        putMapEntries(m, true);
    }

    /**
     * Removes the mapping for the specified key from this map if present.
     *
     * @param  key key whose mapping is to be removed from the map
     * @return the previous value associated with key, or
     *         null if there was no mapping for key.
     *         (A null return can also indicate that the map
     *         previously associated null with key.)
     */
    public V remove(Object key) {
        Node<K,V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
            null : e.value;
    }

    /**
     * Implements Map.remove and related methods.
     *
     * @param hash hash for key
     * @param key the key
     * @param value the value to match if matchValue, else ignored
     * @param matchValue if true only remove if value is equal
     * @param movable if false do not move other nodes while removing
     * @return the node, or null if none
     */
    final Node<K,V> removeNode(int hash, Object key, Object value,
                               boolean matchValue, boolean movable) {
        Node<K,V>[] tab; Node<K,V> p; int n, index;
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (p = tab[index = (n - 1) & hash]) != null) {
            Node<K,V> node = null, e; K k; V v;
            if (p.hash == hash &&
                ((k = p.key) == key || (key != null && key.equals(k))))
                node = p;
            else if ((e = p.next) != null) {
                if (p instanceof TreeNode)
                    node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
                else {
                    do {
                        if (e.hash == hash &&
                            ((k = e.key) == key ||
                             (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            if (node != null && (!matchValue || (v = node.value) == value ||
                                 (value != null && value.equals(v)))) {
                if (node instanceof TreeNode)
                    ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
                else if (node == p)
                    tab[index] = node.next;
                else
                    p.next = node.next;
                ++modCount;
                --size;
                afterNodeRemoval(node);
                return node;
            }
        }
        return null;
    }

    /**
     * Removes all of the mappings from this map.
     * The map will be empty after this call returns.
     */
    public void clear() {
        Node<K,V>[] tab;
        modCount++;
        if ((tab = table) != null && size > 0) {
            size = 0;
            for (int i = 0; i < tab.length; ++i)
                tab[i] = null;
        }
    }

    /**
     * Returns true if this map maps one or more keys to the
     * specified value.
     *
     * @param value value whose presence in this map is to be tested
     * @return true if this map maps one or more keys to the
     *         specified value
     */
    public boolean containsValue(Object value) {
        Node<K,V>[] tab; V v;
        if ((tab = table) != null && size > 0) {
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                    if ((v = e.value) == value ||
                        (value != null && value.equals(v)))
                        return true;
                }
            }
        }
        return false;
    }

    /**
     * Returns a {@link Set} view of the keys contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own remove operation), the results of
     * the iteration are undefined.  The set supports element removal,
     * which removes the corresponding mapping from the map, via the
     * Iterator.remove, Set.remove,
     * removeAll, retainAll, and clear
     * operations.  It does not support the add or addAll
     * operations.
     *
     * @return a set view of the keys contained in this map
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        if (ks == null) {
            ks = new KeySet();
            keySet = ks;
        }
        return ks;
    }

    final class KeySet extends AbstractSet<K> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<K> iterator()     { return new KeyIterator(); }
        public final boolean contains(Object o) { return containsKey(o); }
        public final boolean remove(Object key) {
            return removeNode(hash(key), key, null, false, true) != null;
        }
        public final Spliterator<K> spliterator() {
            return new KeySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super K> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                // Android-changed: Detect changes to modCount early.
                for (int i = 0; (i < tab.length && modCount == mc); ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.key);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Collection} view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice-versa.  If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own remove operation),
     * the results of the iteration are undefined.  The collection
     * supports element removal, which removes the corresponding
     * mapping from the map, via the Iterator.remove,
     * Collection.remove, removeAll,
     * retainAll and clear operations.  It does not
     * support the add or addAll operations.
     *
     * @return a view of the values contained in this map
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        if (vs == null) {
            vs = new Values();
            values = vs;
        }
        return vs;
    }

    final class Values extends AbstractCollection<V> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<V> iterator()     { return new ValueIterator(); }
        public final boolean contains(Object o) { return containsValue(o); }
        public final Spliterator<V> spliterator() {
            return new ValueSpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super V> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                // Android-changed: Detect changes to modCount early.
                for (int i = 0; (i < tab.length && modCount == mc); ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e.value);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    /**
     * Returns a {@link Set} view of the mappings contained in this map.
     * The set is backed by the map, so changes to the map are
     * reflected in the set, and vice-versa.  If the map is modified
     * while an iteration over the set is in progress (except through
     * the iterator's own remove operation, or through the
     * setValue operation on a map entry returned by the
     * iterator) the results of the iteration are undefined.  The set
     * supports element removal, which removes the corresponding
     * mapping from the map, via the Iterator.remove,
     * Set.remove, removeAll, retainAll and
     * clear operations.  It does not support the
     * add or addAll operations.
     *
     * @return a set view of the mappings contained in this map
     */
    public Set<Map.Entry<K,V>> entrySet() {
        Set<Map.Entry<K,V>> es;
        return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
    }

    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        public final Iterator<Map.Entry<K,V>> iterator() {
            return new EntryIterator();
        }
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            Object key = e.getKey();
            Node<K,V> candidate = getNode(hash(key), key);
            return candidate != null && candidate.equals(e);
        }
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>) o;
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }
        public final Spliterator<Map.Entry<K,V>> spliterator() {
            return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
        }
        public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
            Node<K,V>[] tab;
            if (action == null)
                throw new NullPointerException();
            if (size > 0 && (tab = table) != null) {
                int mc = modCount;
                // Android-changed: Detect changes to modCount early.
                for (int i = 0; (i < tab.length && modCount == mc); ++i) {
                    for (Node<K,V> e = tab[i]; e != null; e = e.next)
                        action.accept(e);
                }
                if (modCount != mc)
                    throw new ConcurrentModificationException();
            }
        }
    }

    // Overrides of JDK8 Map extension methods

    @Override
    public V getOrDefault(Object key, V defaultValue) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
    }

    @Override
    public V putIfAbsent(K key, V value) {
    return putVal(hash(key), key, value, true, true);
    }

    @Override
    public boolean remove(Object key, Object value) {
    return removeNode(hash(key), key, value, true, true) != null;
    }

    @Override
    public boolean replace(K key, V oldValue, V newValue) {
    Node<K,V> e; V v;
    if ((e = getNode(hash(key), key)) != null &&
    ((v = e.value) == oldValue || (v != null && v.equals(oldValue)))) {
    e.value = newValue;
    afterNodeAccess(e);
    return true;
    }
    return false;
    }

    @Override
    public V replace(K key, V value) {
    Node<K,V> e;
    if ((e = getNode(hash(key), key)) != null) {
    V oldValue = e.value;
    e.value = value;
    afterNodeAccess(e);
    return oldValue;
    }
    return null;
    }

    @Override
    public V computeIfAbsent(K key,
    Function<? super K, ? extends V> mappingFunction) {
    if (mappingFunction == null)
    throw new NullPointerException();
    int hash = hash(key);
    Node<K,V>[] tab; Node<K,V> first; int n, i;
    int binCount = 0;
    TreeNode<K,V> t = null;
    Node<K,V> old = null;
    if (size > threshold || (tab = table) == null ||
    (n = tab.length) == 0)
    n = (tab = resize()).length;
    if ((first = tab[i = (n - 1) & hash]) != null) {
    if (first instanceof TreeNode)
    old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
    else {
    Node<K,V> e = first; K k;
    do {
    if (e.hash == hash &&
    ((k = e.key) == key || (key != null && key.equals(k)))) {
    old = e;
    break;
    }
    ++binCount;
    } while ((e = e.next) != null);
    }
    V oldValue;
    if (old != null && (oldValue = old.value) != null) {
    afterNodeAccess(old);
    return oldValue;
    }
    }
    V v = mappingFunction.apply(key);
    if (v == null) {
    return null;
    } else if (old != null) {
    old.value = v;
    afterNodeAccess(old);
    return v;
    }
    else if (t != null)
    t.putTreeVal(this, tab, hash, key, v);
    else {
    tab[i] = newNode(hash, key, v, first);
    if (binCount >= TREEIFY_THRESHOLD - 1)
    treeifyBin(tab, hash);
    }
    ++modCount;
    ++size;
    afterNodeInsertion(true);
    return v;
    }

    @Override
    public V computeIfPresent(K key,
                              BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
    if (remappingFunction == null)
    throw new NullPointerException();
    Node<K,V> e; V oldValue;
    int hash = hash(key);
    if ((e = getNode(hash, key)) != null &&
    (oldValue = e.value) != null) {
    V v = remappingFunction.apply(key, oldValue);
    if (v != null) {
    e.value = v;
    afterNodeAccess(e);
    return v;
    }
    else
    removeNode(hash, key, null, false, true);
    }
    return null;
    }

    @Override
    public V compute(K key,
    BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
    if (remappingFunction == null)
    throw new NullPointerException();
    int hash = hash(key);
    Node<K,V>[] tab; Node<K,V> first; int n, i;
    int binCount = 0;
    TreeNode<K,V> t = null;
    Node<K,V> old = null;
    if (size > threshold || (tab = table) == null ||
    (n = tab.length) == 0)
    n = (tab = resize()).length;
    if ((first = tab[i = (n - 1) & hash]) != null) {
    if (first instanceof TreeNode)
    old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
    else {
    Node<K,V> e = first; K k;
    do {
    if (e.hash == hash &&
    ((k = e.key) == key || (key != null && key.equals(k)))) {
    old = e;
    break;
    }
    ++binCount;
    } while ((e = e.next) != null);
    }
    }
    V oldValue = (old == null) ? null : old.value;
    V v = remappingFunction.apply(key, oldValue);
    if (old != null) {
    if (v != null) {
    old.value = v;
    afterNodeAccess(old);
    }
    else
    removeNode(hash, key, null, false, true);
    }
    else if (v != null) {
    if (t != null)
    t.putTreeVal(this, tab, hash, key, v);
    else {
    tab[i] = newNode(hash, key, v, first);
    if (binCount >= TREEIFY_THRESHOLD - 1)
    treeifyBin(tab, hash);
    }
    ++modCount;
    ++size;
    afterNodeInsertion(true);
    }
    return v;
    }

    @Override
    public V merge(K key, V value,
    BiFunction<? super V, ? super V, ? extends V> remappingFunction) {
    if (value == null)
    throw new NullPointerException();
    if (remappingFunction == null)
    throw new NullPointerException();
    int hash = hash(key);
    Node<K,V>[] tab; Node<K,V> first; int n, i;
    int binCount = 0;
    TreeNode<K,V> t = null;
    Node<K,V> old = null;
    if (size > threshold || (tab = table) == null ||
    (n = tab.length) == 0)
    n = (tab = resize()).length;
    if ((first = tab[i = (n - 1) & hash]) != null) {
    if (first instanceof TreeNode)
    old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
    else {
    Node<K,V> e = first; K k;
    do {
    if (e.hash == hash &&
    ((k = e.key) == key || (key != null && key.equals(k)))) {
    old = e;
    break;
    }
    ++binCount;
    } while ((e = e.next) != null);
    }
    }
    if (old != null) {
    V v;
    if (old.value != null)
    v = remappingFunction.apply(old.value, value);
    else
    v = value;
    if (v != null) {
    old.value = v;
    afterNodeAccess(old);
    }
    else
    removeNode(hash, key, null, false, true);
    return v;
    }
    if (value != null) {
    if (t != null)
    t.putTreeVal(this, tab, hash, key, value);
    else {
    tab[i] = newNode(hash, key, value, first);
    if (binCount >= TREEIFY_THRESHOLD - 1)
    treeifyBin(tab, hash);
    }
    ++modCount;
    ++size;
    afterNodeInsertion(true);
    }
    return value;
    }
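These overrides of the JDK 8 Map default methods enable common idioms such as counting and lazy initialization. A brief usage sketch (application code, not part of the source; the class name is made up):

    import java.util.HashMap;
    import java.util.Map;

    public class ComputeDemo {
        public static void main(String[] args) {
            Map<String, Integer> counts = new HashMap<>();
            for (String w : new String[] {"a", "b", "a"})
                counts.merge(w, 1, Integer::sum); // insert 1, or add 1 to the old value
            System.out.println(counts);           // {a=2, b=1} (order not guaranteed)
            // computeIfAbsent inserts lazily; note a computed null is NOT stored.
            counts.computeIfAbsent("c", k -> 3);
            System.out.println(counts.getOrDefault("d", 0)); // 0
        }
    }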

    @Override
    public void forEach(BiConsumer<? super K, ? super V> action) {
    Node<K,V>[] tab;
    if (action == null)
    throw new NullPointerException();
    if (size > 0 && (tab = table) != null) {
    int mc = modCount;
    // Android-changed: Detect changes to modCount early.
    for (int i = 0; (i < tab.length && mc == modCount); ++i) {
    for (Node<K,V> e = tab[i]; e != null; e = e.next)
    action.accept(e.key, e.value);
    }
    if (modCount != mc)
    throw new ConcurrentModificationException();
    }
    }

    @Override
    public void replaceAll(BiFunction<? super K, ? super V, ? extends V> function) {
    Node<K,V>[] tab;
    if (function == null)
    throw new NullPointerException();
    if (size > 0 && (tab = table) != null) {
    int mc = modCount;
    for (int i = 0; i < tab.length; ++i) {
    for (Node<K,V> e = tab[i]; e != null; e = e.next) {
    e.value = function.apply(e.key, e.value);
    }
    }
    if (modCount != mc)
    throw new ConcurrentModificationException();
    }
    }

    /* ------------------------------------------------------------ */
    // Cloning and serialization

    /**
     * Returns a shallow copy of this HashMap instance: the keys and
     * values themselves are not cloned.
     *
     * @return a shallow copy of this map
     */
    @SuppressWarnings("unchecked")
    @Override
    public Object clone() {
        HashMap<K,V> result;
        try {
            result = (HashMap<K,V>)super.clone();
        } catch (CloneNotSupportedException e) {
            // this shouldn't happen, since we are Cloneable
            throw new InternalError(e);
        }
        result.reinitialize();
        result.putMapEntries(this, false);
        return result;
    }

    // These methods are also used when serializing HashSets
    final float loadFactor() { return loadFactor; }
    final int capacity() {
    return (table != null) ? table.length :
    (threshold > 0) ? threshold :
    DEFAULT_INITIAL_CAPACITY;
    }

    /**
     * Save the state of the HashMap instance to a stream (i.e.,
     * serialize it).
     *
     * @serialData The capacity of the HashMap (the length of the
     *             bucket array) is emitted (int), followed by the
     *             size (an int, the number of key-value
     *             mappings), followed by the key (Object) and value (Object)
     *             for each key-value mapping.  The key-value mappings are
     *             emitted in no particular order.
     */
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException {
        int buckets = capacity();
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();
        s.writeInt(buckets);
        s.writeInt(size);
        internalWriteEntries(s);
    }

    /**
     * Reconstitute the {@code HashMap} instance from a stream (i.e.,
     * deserialize it).
     */
    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException {
        // Read in the threshold (ignored), loadfactor, and any hidden stuff
        s.defaultReadObject();
        reinitialize();
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new InvalidObjectException("Illegal load factor: " +
                                             loadFactor);
        s.readInt();                // Read and ignore number of buckets
        int mappings = s.readInt(); // Read number of mappings (size)
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " +
                                             mappings);
        else if (mappings > 0) { // (if zero, use defaults)
            // Size the table using given load factor only if within
            // range of 0.25...4.0
            float lf = Math.min(Math.max(0.25f, loadFactor), 4.0f);
            float fc = (float)mappings / lf + 1.0f;
            int cap = ((fc < DEFAULT_INITIAL_CAPACITY) ?
                       DEFAULT_INITIAL_CAPACITY :
                       (fc >= MAXIMUM_CAPACITY) ?
                       MAXIMUM_CAPACITY :
                       tableSizeFor((int)fc));
            float ft = (float)cap * lf;
            threshold = ((cap < MAXIMUM_CAPACITY && ft < MAXIMUM_CAPACITY) ?
                         (int)ft : Integer.MAX_VALUE);
            @SuppressWarnings({"rawtypes","unchecked"})
                Node<K,V>[] tab = (Node<K,V>[])new Node[cap];
            table = tab;

            // Read the keys and values, and put the mappings in the HashMap
            for (int i = 0; i < mappings; i++) {
                @SuppressWarnings("unchecked")
                    K key = (K) s.readObject();
                @SuppressWarnings("unchecked")
                    V value = (V) s.readObject();
                putVal(hash(key), key, value, false, false);
            }
        }
    }

    /* ------------------------------------------------------------ */
    // iterators

    abstract class HashIterator {
    Node<K,V> next; // next entry to return
    Node<K,V> current; // current entry
    int expectedModCount; // for fast-fail
    int index; // current slot

     HashIterator() {
         expectedModCount = modCount;
         Node<K,V>[] t = table;
         current = next = null;
         index = 0;
         if (t != null && size > 0) { // advance to first entry
             do {} while (index < t.length && (next = t[index++]) == null);
         }
     }
    
     public final boolean hasNext() {
         return next != null;
     }
    
     final Node<K,V> nextNode() {
         Node<K,V>[] t;
         Node<K,V> e = next;
         if (modCount != expectedModCount)
             throw new ConcurrentModificationException();
         if (e == null)
             throw new NoSuchElementException();
         if ((next = (current = e).next) == null && (t = table) != null) {
             do {} while (index < t.length && (next = t[index++]) == null);
         }
         return e;
     }
    
     public final void remove() {
         Node<K,V> p = current;
         if (p == null)
             throw new IllegalStateException();
         if (modCount != expectedModCount)
             throw new ConcurrentModificationException();
         current = null;
         K key = p.key;
         removeNode(hash(key), key, null, false, false);
         expectedModCount = modCount;
     }
    

    }

    final class KeyIterator extends HashIterator
        implements Iterator<K> {
        public final K next() { return nextNode().key; }
    }

    final class ValueIterator extends HashIterator
        implements Iterator<V> {
        public final V next() { return nextNode().value; }
    }

    final class EntryIterator extends HashIterator
        implements Iterator<Map.Entry<K,V>> {
        public final Map.Entry<K,V> next() { return nextNode(); }
    }
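All three iterators inherit the expectedModCount check from HashIterator above, which is what makes the collection views fail-fast. A short sketch contrasting removal through the map (which would trip the check) with removal through the iterator (which resynchronizes it):

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class FailFastDemo {
        public static void main(String[] args) {
            Map<String, Integer> m = new HashMap<>();
            m.put("a", 1);
            m.put("b", 2);
            // Calling m.remove("a") inside this loop would bump modCount and make
            // the next nextNode() call throw ConcurrentModificationException.
            for (Iterator<String> it = m.keySet().iterator(); it.hasNext(); ) {
                if (it.next().equals("a"))
                    it.remove(); // resynchronizes expectedModCount with modCount
            }
            System.out.println(m); // {b=2}
        }
    }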

    /* ------------------------------------------------------------ */
    // spliterators

    static class HashMapSpliterator<K,V> {
    final HashMap<K,V> map;
    Node<K,V> current; // current node
    int index; // current index, modified on advance/split
    int fence; // one past last index
    int est; // size estimate
    int expectedModCount; // for comodification checks

     HashMapSpliterator(HashMap<K,V> m, int origin,
                        int fence, int est,
                        int expectedModCount) {
         this.map = m;
         this.index = origin;
         this.fence = fence;
         this.est = est;
         this.expectedModCount = expectedModCount;
     }
    
     final int getFence() { // initialize fence and size on first use
         int hi;
         if ((hi = fence) < 0) {
             HashMap<K,V> m = map;
             est = m.size;
             expectedModCount = m.modCount;
             Node<K,V>[] tab = m.table;
             hi = fence = (tab == null) ? 0 : tab.length;
         }
         return hi;
     }
    
     public final long estimateSize() {
         getFence(); // force init
         return (long) est;
     }
    

    }

    static final class KeySpliterator<K,V>
        extends HashMapSpliterator<K,V>
        implements Spliterator<K> {
    KeySpliterator(HashMap<K,V> m, int origin, int fence, int est,
    int expectedModCount) {
    super(m, origin, fence, est, expectedModCount);
    }

     public KeySpliterator<K,V> trySplit() {
         int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
         return (lo >= mid || current != null) ? null :
             new KeySpliterator<>(map, lo, index = mid, est >>>= 1,
                                     expectedModCount);
     }
    
     public void forEachRemaining(Consumer<? super K> action) {
         int i, hi, mc;
         if (action == null)
             throw new NullPointerException();
         HashMap<K,V> m = map;
         Node<K,V>[] tab = m.table;
         if ((hi = fence) < 0) {
             mc = expectedModCount = m.modCount;
             hi = fence = (tab == null) ? 0 : tab.length;
         }
         else
             mc = expectedModCount;
         if (tab != null && tab.length >= hi &&
             (i = index) >= 0 && (i < (index = hi) || current != null)) {
             Node<K,V> p = current;
             current = null;
             do {
                 if (p == null)
                     p = tab[i++];
                 else {
                     action.accept(p.key);
                     p = p.next;
                 }
             } while (p != null || i < hi);
             if (m.modCount != mc)
                 throw new ConcurrentModificationException();
         }
     }
    
     public boolean tryAdvance(Consumer<? super K> action) {
         int hi;
         if (action == null)
             throw new NullPointerException();
         Node<K,V>[] tab = map.table;
         if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
             while (current != null || index < hi) {
                 if (current == null)
                     current = tab[index++];
                 else {
                     K k = current.key;
                     current = current.next;
                     action.accept(k);
                     if (map.modCount != expectedModCount)
                         throw new ConcurrentModificationException();
                     return true;
                 }
             }
         }
         return false;
     }
    
     public int characteristics() {
         return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
             Spliterator.DISTINCT;
     }
    

    }

    static final class ValueSpliterator<K,V>
        extends HashMapSpliterator<K,V>
        implements Spliterator<V> {
    ValueSpliterator(HashMap<K,V> m, int origin, int fence, int est,
    int expectedModCount) {
    super(m, origin, fence, est, expectedModCount);
    }

     public ValueSpliterator<K,V> trySplit() {
         int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
         return (lo >= mid || current != null) ? null :
             new ValueSpliterator<>(map, lo, index = mid, est >>>= 1,
                                       expectedModCount);
     }
    
     public void forEachRemaining(Consumer<? super V> action) {
         int i, hi, mc;
         if (action == null)
             throw new NullPointerException();
         HashMap<K,V> m = map;
         Node<K,V>[] tab = m.table;
         if ((hi = fence) < 0) {
             mc = expectedModCount = m.modCount;
             hi = fence = (tab == null) ? 0 : tab.length;
         }
         else
             mc = expectedModCount;
         if (tab != null && tab.length >= hi &&
             (i = index) >= 0 && (i < (index = hi) || current != null)) {
             Node<K,V> p = current;
             current = null;
             do {
                 if (p == null)
                     p = tab[i++];
                 else {
                     action.accept(p.value);
                     p = p.next;
                 }
             } while (p != null || i < hi);
             if (m.modCount != mc)
                 throw new ConcurrentModificationException();
         }
     }
    
     public boolean tryAdvance(Consumer<? super V> action) {
         int hi;
         if (action == null)
             throw new NullPointerException();
         Node<K,V>[] tab = map.table;
         if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
             while (current != null || index < hi) {
                 if (current == null)
                     current = tab[index++];
                 else {
                     V v = current.value;
                     current = current.next;
                     action.accept(v);
                     if (map.modCount != expectedModCount)
                         throw new ConcurrentModificationException();
                     return true;
                 }
             }
         }
         return false;
     }
    
     public int characteristics() {
         return (fence < 0 || est == map.size ? Spliterator.SIZED : 0);
     }
    

    }

    static final class EntrySpliterator<K,V>
        extends HashMapSpliterator<K,V>
        implements Spliterator<Map.Entry<K,V>> {
      EntrySpliterator(HashMap<K,V> m, int origin, int fence, int est,
                       int expectedModCount) {
          super(m, origin, fence, est, expectedModCount);
      }

     public EntrySpliterator<K,V> trySplit() {
         int hi = getFence(), lo = index, mid = (lo + hi) >>> 1;
         return (lo >= mid || current != null) ? null :
             new EntrySpliterator<>(map, lo, index = mid, est >>>= 1,
                                       expectedModCount);
     }
    
     public void forEachRemaining(Consumer<? super Map.Entry<K,V>> action) {
         int i, hi, mc;
         if (action == null)
             throw new NullPointerException();
         HashMap<K,V> m = map;
         Node<K,V>[] tab = m.table;
         if ((hi = fence) < 0) {
             mc = expectedModCount = m.modCount;
             hi = fence = (tab == null) ? 0 : tab.length;
         }
         else
             mc = expectedModCount;
         if (tab != null && tab.length >= hi &&
             (i = index) >= 0 && (i < (index = hi) || current != null)) {
             Node<K,V> p = current;
             current = null;
             do {
                 if (p == null)
                     p = tab[i++];
                 else {
                     action.accept(p);
                     p = p.next;
                 }
             } while (p != null || i < hi);
             if (m.modCount != mc)
                 throw new ConcurrentModificationException();
         }
     }
    
     public boolean tryAdvance(Consumer<? super Map.Entry<K,V>> action) {
         int hi;
         if (action == null)
             throw new NullPointerException();
         Node<K,V>[] tab = map.table;
         if (tab != null && tab.length >= (hi = getFence()) && index >= 0) {
             while (current != null || index < hi) {
                 if (current == null)
                     current = tab[index++];
                 else {
                     Node<K,V> e = current;
                     current = current.next;
                     action.accept(e);
                     if (map.modCount != expectedModCount)
                         throw new ConcurrentModificationException();
                     return true;
                 }
             }
         }
         return false;
     }
    
     public int characteristics() {
         return (fence < 0 || est == map.size ? Spliterator.SIZED : 0) |
             Spliterator.DISTINCT;
     }
    

    }
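    // Illustrative note (added for this article, not part of the JDK source):
    // these spliterators back the map's stream views. A minimal usage sketch,
    // assuming an ordinary HashMap on JDK 8+:
    //
    //     HashMap<String,Integer> m = new HashMap<>();
    //     m.put("a", 1); m.put("b", 2); m.put("c", 3);
    //     Spliterator<String> s = m.keySet().spliterator();
    //     Spliterator<String> half = s.trySplit();   // null if the bucket range is too small to halve
    //     s.forEachRemaining(System.out::println);
    //     if (half != null) half.forEachRemaining(System.out::println);
    //
    // trySplit() halves the [index, fence) bucket range and the size estimate
    // (est >>>= 1), which is why characteristics() reports SIZED only before
    // the first split, while the estimate still matches map.size.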

    /* ------------------------------------------------------------ */
    // LinkedHashMap support

    /*
     * The following package-protected methods are designed to be
     * overridden by LinkedHashMap, but not by any other subclass.
     * Nearly all other internal methods are also package-protected
     * but are declared final, so can be used by LinkedHashMap, view
     * classes, and HashSet.
     */

    // Create a regular (non-tree) node
    Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
        return new Node<>(hash, key, value, next);
    }

    // For conversion from TreeNodes to plain nodes
    Node<K,V> replacementNode(Node<K,V> p, Node<K,V> next) {
        return new Node<>(p.hash, p.key, p.value, next);
    }

    // Create a tree bin node
    TreeNode<K,V> newTreeNode(int hash, K key, V value, Node<K,V> next) {
        return new TreeNode<>(hash, key, value, next);
    }

    // For treeifyBin
    TreeNode<K,V> replacementTreeNode(Node<K,V> p, Node<K,V> next) {
        return new TreeNode<>(p.hash, p.key, p.value, next);
    }

    /**
     * Reset to initial default state.  Called by clone and readObject.
     */
    void reinitialize() {
        table = null;
        entrySet = null;
        keySet = null;
        values = null;
        modCount = 0;
        threshold = 0;
        size = 0;
    }

    // Callbacks to allow LinkedHashMap post-actions
    void afterNodeAccess(Node<K,V> p) { }
    void afterNodeInsertion(boolean evict) { }
    void afterNodeRemoval(Node<K,V> p) { }

    // Called only from writeObject, to ensure compatible ordering.
    void internalWriteEntries(java.io.ObjectOutputStream s) throws IOException {
        Node<K,V>[] tab;
        if (size > 0 && (tab = table) != null) {
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                    s.writeObject(e.key);
                    s.writeObject(e.value);
                }
            }
        }
    }

    /* ------------------------------------------------------------ */
    // Tree bins

    /**
     * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn
     * extends Node) so can be used as extension of either regular or
     * linked node.
     */
    static final class TreeNode<K,V> extends LinkedHashMap.LinkedHashMapEntry<K,V> {
        TreeNode<K,V> parent;  // red-black tree links
        TreeNode<K,V> left;
        TreeNode<K,V> right;
        TreeNode<K,V> prev;    // needed to unlink next upon deletion
        boolean red;
        TreeNode(int hash, K key, V val, Node<K,V> next) {
            super(hash, key, val, next);
        }

        /**
         * Returns root of tree containing this node.
         */
        final TreeNode<K,V> root() {
            for (TreeNode<K,V> r = this, p;;) {
                if ((p = r.parent) == null)
                    return r;
                r = p;
            }
        }

        /**
         * Ensures that the given root is the first node of its bin.
         */
        static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
            int n;
            if (root != null && tab != null && (n = tab.length) > 0) {
                int index = (n - 1) & root.hash;
                TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
                if (root != first) {
                    Node<K,V> rn;
                    tab[index] = root;
                    TreeNode<K,V> rp = root.prev;
                    if ((rn = root.next) != null)
                        ((TreeNode<K,V>)rn).prev = rp;
                    if (rp != null)
                        rp.next = rn;
                    if (first != null)
                        first.prev = root;
                    root.next = first;
                    root.prev = null;
                }
                assert checkInvariants(root);
            }
        }

        /**
         * Finds the node starting at root p with the given hash and key.
         * The kc argument caches comparableClassFor(key) upon first use
         * comparing keys.
         */
        final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
            TreeNode<K,V> p = this;
            do {
                int ph, dir; K pk;
                TreeNode<K,V> pl = p.left, pr = p.right, q;
                if ((ph = p.hash) > h)
                    p = pl;
                else if (ph < h)
                    p = pr;
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                else if (pl == null)
                    p = pr;
                else if (pr == null)
                    p = pl;
                else if ((kc != null ||
                          (kc = comparableClassFor(k)) != null) &&
                         (dir = compareComparables(kc, k, pk)) != 0)
                    p = (dir < 0) ? pl : pr;
                else if ((q = pr.find(h, k, kc)) != null)
                    return q;
                else
                    p = pl;
            } while (p != null);
            return null;
        }

        /**
         * Calls find for root node.
         */
        final TreeNode<K,V> getTreeNode(int h, Object k) {
            return ((parent != null) ? root() : this).find(h, k, null);
        }

        /**
         * Tie-breaking utility for ordering insertions when equal
         * hashCodes and non-comparable. We don't require a total
         * order, just a consistent insertion rule to maintain
         * equivalence across rebalancings. Tie-breaking further than
         * necessary simplifies testing a bit.
         */
        static int tieBreakOrder(Object a, Object b) {
            int d;
            if (a == null || b == null ||
                (d = a.getClass().getName().
                 compareTo(b.getClass().getName())) == 0)
                d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
                     -1 : 1);
            return d;
        }
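        // Illustrative note (added for this article, not part of the JDK
        // source): for two distinct keys of the same class with colliding
        // hashCodes and no Comparable order, the class-name comparison ties
        // and identityHashCode decides, e.g.:
        //
        //     Object a = new Object(), b = new Object();  // same class, assume they land in one bin
        //     int d = tieBreakOrder(a, b);                // -1 or 1, never 0
        //
        // The result is arbitrary but fixed for a given pair within a run,
        // which is all the tree needs for a consistent insertion rule.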

        /**
         * Forms tree of the nodes linked from this node.
         */
        final void treeify(Node<K,V>[] tab) {
            TreeNode<K,V> root = null;
            for (TreeNode<K,V> x = this, next; x != null; x = next) {
                next = (TreeNode<K,V>)x.next;
                x.left = x.right = null;
                if (root == null) {
                    x.parent = null;
                    x.red = false;
                    root = x;
                }
                else {
                    K k = x.key;
                    int h = x.hash;
                    Class<?> kc = null;
                    for (TreeNode<K,V> p = root;;) {
                        int dir, ph;
                        K pk = p.key;
                        if ((ph = p.hash) > h)
                            dir = -1;
                        else if (ph < h)
                            dir = 1;
                        else if ((kc == null &&
                                  (kc = comparableClassFor(k)) == null) ||
                                 (dir = compareComparables(kc, k, pk)) == 0)
                            dir = tieBreakOrder(k, pk);

                        TreeNode<K,V> xp = p;
                        if ((p = (dir <= 0) ? p.left : p.right) == null) {
                            x.parent = xp;
                            if (dir <= 0)
                                xp.left = x;
                            else
                                xp.right = x;
                            root = balanceInsertion(root, x);
                            break;
                        }
                    }
                }
            }
            moveRootToFront(tab, root);
        }
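        // Illustrative note (added for this article, not part of the JDK
        // source): apart from the resize-time split() below, treeify() is
        // reached via treeifyBin(), and only when a bin holds at least
        // TREEIFY_THRESHOLD (8) nodes and the table has at least
        // MIN_TREEIFY_CAPACITY (64) slots; smaller tables are resized
        // instead, on the theory that heavy collisions there reflect table
        // size rather than bad hashing.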

        /**
         * Returns a list of non-TreeNodes replacing those linked from
         * this node.
         */
        final Node<K,V> untreeify(HashMap<K,V> map) {
            Node<K,V> hd = null, tl = null;
            for (Node<K,V> q = this; q != null; q = q.next) {
                Node<K,V> p = map.replacementNode(q, null);
                if (tl == null)
                    hd = p;
                else
                    tl.next = p;
                tl = p;
            }
            return hd;
        }

        /**
         * Tree version of putVal.
         */
        final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
                                       int h, K k, V v) {
            Class<?> kc = null;
            boolean searched = false;
            TreeNode<K,V> root = (parent != null) ? root() : this;
            for (TreeNode<K,V> p = root;;) {
                int dir, ph; K pk;
                if ((ph = p.hash) > h)
                    dir = -1;
                else if (ph < h)
                    dir = 1;
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0) {
                    if (!searched) {
                        TreeNode<K,V> q, ch;
                        searched = true;
                        if (((ch = p.left) != null &&
                             (q = ch.find(h, k, kc)) != null) ||
                            ((ch = p.right) != null &&
                             (q = ch.find(h, k, kc)) != null))
                            return q;
                    }
                    dir = tieBreakOrder(k, pk);
                }

                TreeNode<K,V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    Node<K,V> xpn = xp.next;
                    TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    xp.next = x;
                    x.parent = x.prev = xp;
                    if (xpn != null)
                        ((TreeNode<K,V>)xpn).prev = x;
                    moveRootToFront(tab, balanceInsertion(root, x));
                    return null;
                }
            }
        }

        /**
         * Removes the given node, that must be present before this call.
         * This is messier than typical red-black deletion code because we
         * cannot swap the contents of an interior node with a leaf
         * successor that is pinned by "next" pointers that are accessible
         * independently during traversal. So instead we swap the tree
         * linkages. If the current tree appears to have too few nodes,
         * the bin is converted back to a plain bin. (The test triggers
         * somewhere between 2 and 6 nodes, depending on tree structure).
         */
        final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,
                                  boolean movable) {
            int n;
            if (tab == null || (n = tab.length) == 0)
                return;
            int index = (n - 1) & hash;
            TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;
            TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;
            if (pred == null)
                tab[index] = first = succ;
            else
                pred.next = succ;
            if (succ != null)
                succ.prev = pred;
            if (first == null)
                return;
            if (root.parent != null)
                root = root.root();
            if (root == null || root.right == null ||
                (rl = root.left) == null || rl.left == null) {
                tab[index] = first.untreeify(map);  // too small
                return;
            }
            TreeNode<K,V> p = this, pl = left, pr = right, replacement;
            if (pl != null && pr != null) {
                TreeNode<K,V> s = pr, sl;
                while ((sl = s.left) != null) // find successor
                    s = sl;
                boolean c = s.red; s.red = p.red; p.red = c; // swap colors
                TreeNode<K,V> sr = s.right;
                TreeNode<K,V> pp = p.parent;
                if (s == pr) { // p was s's direct parent
                    p.parent = s;
                    s.right = p;
                }
                else {
                    TreeNode<K,V> sp = s.parent;
                    if ((p.parent = sp) != null) {
                        if (s == sp.left)
                            sp.left = p;
                        else
                            sp.right = p;
                    }
                    if ((s.right = pr) != null)
                        pr.parent = s;
                }
                p.left = null;
                if ((p.right = sr) != null)
                    sr.parent = p;
                if ((s.left = pl) != null)
                    pl.parent = s;
                if ((s.parent = pp) == null)
                    root = s;
                else if (p == pp.left)
                    pp.left = s;
                else
                    pp.right = s;
                if (sr != null)
                    replacement = sr;
                else
                    replacement = p;
            }
            else if (pl != null)
                replacement = pl;
            else if (pr != null)
                replacement = pr;
            else
                replacement = p;
            if (replacement != p) {
                TreeNode<K,V> pp = replacement.parent = p.parent;
                if (pp == null)
                    root = replacement;
                else if (p == pp.left)
                    pp.left = replacement;
                else
                    pp.right = replacement;
                p.left = p.right = p.parent = null;
            }

            TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);

            if (replacement == p) {  // detach
                TreeNode<K,V> pp = p.parent;
                p.parent = null;
                if (pp != null) {
                    if (p == pp.left)
                        pp.left = null;
                    else if (p == pp.right)
                        pp.right = null;
                }
            }
            if (movable)
                moveRootToFront(tab, r);
        }
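        // Illustrative note (added for this article, not part of the JDK
        // source): unlike split(), which counts nodes against
        // UNTREEIFY_THRESHOLD (6), this method uses the quick structural test
        // near its top (root, root.right, root.left, root.left.left) as a
        // cheap proxy for "too few nodes"; that is why the javadoc says the
        // conversion triggers somewhere between 2 and 6 nodes, depending on
        // tree shape.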

        /**
         * Splits nodes in a tree bin into lower and upper tree bins,
         * or untreeifies if now too small. Called only from resize;
         * see above discussion about split bits and indices.
         *
         * @param map the map
         * @param tab the table for recording bin heads
         * @param index the index of the table being split
         * @param bit the bit of hash to split on
         */
        final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
            TreeNode<K,V> b = this;
            // Relink into lo and hi lists, preserving order
            TreeNode<K,V> loHead = null, loTail = null;
            TreeNode<K,V> hiHead = null, hiTail = null;
            int lc = 0, hc = 0;
            for (TreeNode<K,V> e = b, next; e != null; e = next) {
                next = (TreeNode<K,V>)e.next;
                e.next = null;
                if ((e.hash & bit) == 0) {
                    if ((e.prev = loTail) == null)
                        loHead = e;
                    else
                        loTail.next = e;
                    loTail = e;
                    ++lc;
                }
                else {
                    if ((e.prev = hiTail) == null)
                        hiHead = e;
                    else
                        hiTail.next = e;
                    hiTail = e;
                    ++hc;
                }
            }

            if (loHead != null) {
                if (lc <= UNTREEIFY_THRESHOLD)
                    tab[index] = loHead.untreeify(map);
                else {
                    tab[index] = loHead;
                    if (hiHead != null) // (else is already treeified)
                        loHead.treeify(tab);
                }
            }
            if (hiHead != null) {
                if (hc <= UNTREEIFY_THRESHOLD)
                    tab[index + bit] = hiHead.untreeify(map);
                else {
                    tab[index + bit] = hiHead;
                    if (loHead != null)
                        hiHead.treeify(tab);
                }
            }
        }
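        // Illustrative note (added for this article, not part of the JDK
        // source): during resize, "bit" is the old capacity, so each node
        // either keeps its index or moves exactly oldCap slots up. With
        // oldCap = 16:
        //
        //     hash = 0b0_0101 (5)  -> hash & 16 == 0 -> stays at index 5
        //     hash = 0b1_0101 (21) -> hash & 16 != 0 -> moves to index 5 + 16
        //
        // so neither list ever needs its bucket index recomputed from scratch.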

        /* ------------------------------------------------------------ */
        // Red-black tree methods, all adapted from CLR

        static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
                                              TreeNode<K,V> p) {
            TreeNode<K,V> r, pp, rl;
            if (p != null && (r = p.right) != null) {
                if ((rl = p.right = r.left) != null)
                    rl.parent = p;
                if ((pp = r.parent = p.parent) == null)
                    (root = r).red = false;
                else if (pp.left == p)
                    pp.left = r;
                else
                    pp.right = r;
                r.left = p;
                p.parent = r;
            }
            return root;
        }

        static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
                                               TreeNode<K,V> p) {
            TreeNode<K,V> l, pp, lr;
            if (p != null && (l = p.left) != null) {
                if ((lr = p.left = l.right) != null)
                    lr.parent = p;
                if ((pp = l.parent = p.parent) == null)
                    (root = l).red = false;
                else if (pp.right == p)
                    pp.right = l;
                else
                    pp.left = l;
                l.right = p;
                p.parent = l;
            }
            return root;
        }

        static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
                                                    TreeNode<K,V> x) {
            x.red = true;
            for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
                if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                }
                else if (!xp.red || (xpp = xp.parent) == null)
                    return root;
                if (xp == (xppl = xpp.left)) {
                    if ((xppr = xpp.right) != null && xppr.red) {
                        xppr.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    }
                    else {
                        if (x == xp.right) {
                            root = rotateLeft(root, x = xp);
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateRight(root, xpp);
                            }
                        }
                    }
                }
                else {
                    if (xppl != null && xppl.red) {
                        xppl.red = false;
                        xp.red = false;
                        xpp.red = true;
                        x = xpp;
                    }
                    else {
                        if (x == xp.left) {
                            root = rotateRight(root, x = xp);
                            xpp = (xp = x.parent) == null ? null : xp.parent;
                        }
                        if (xp != null) {
                            xp.red = false;
                            if (xpp != null) {
                                xpp.red = true;
                                root = rotateLeft(root, xpp);
                            }
                        }
                    }
                }
            }
        }

        static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
                                                   TreeNode<K,V> x) {
            for (TreeNode<K,V> xp, xpl, xpr;;) {
                if (x == null || x == root)
                    return root;
                else if ((xp = x.parent) == null) {
                    x.red = false;
                    return x;
                }
                else if (x.red) {
                    x.red = false;
                    return root;
                }
                else if ((xpl = xp.left) == x) {
                    if ((xpr = xp.right) != null && xpr.red) {
                        xpr.red = false;
                        xp.red = true;
                        root = rotateLeft(root, xp);
                        xpr = (xp = x.parent) == null ? null : xp.right;
                    }
                    if (xpr == null)
                        x = xp;
                    else {
                        TreeNode<K,V> sl = xpr.left, sr = xpr.right;
                        if ((sr == null || !sr.red) &&
                            (sl == null || !sl.red)) {
                            xpr.red = true;
                            x = xp;
                        }
                        else {
                            if (sr == null || !sr.red) {
                                if (sl != null)
                                    sl.red = false;
                                xpr.red = true;
                                root = rotateRight(root, xpr);
                                xpr = (xp = x.parent) == null ?
                                    null : xp.right;
                            }
                            if (xpr != null) {
                                xpr.red = (xp == null) ? false : xp.red;
                                if ((sr = xpr.right) != null)
                                    sr.red = false;
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateLeft(root, xp);
                            }
                            x = root;
                        }
                    }
                }
                else { // symmetric
                    if (xpl != null && xpl.red) {
                        xpl.red = false;
                        xp.red = true;
                        root = rotateRight(root, xp);
                        xpl = (xp = x.parent) == null ? null : xp.left;
                    }
                    if (xpl == null)
                        x = xp;
                    else {
                        TreeNode<K,V> sl = xpl.left, sr = xpl.right;
                        if ((sl == null || !sl.red) &&
                            (sr == null || !sr.red)) {
                            xpl.red = true;
                            x = xp;
                        }
                        else {
                            if (sl == null || !sl.red) {
                                if (sr != null)
                                    sr.red = false;
                                xpl.red = true;
                                root = rotateLeft(root, xpl);
                                xpl = (xp = x.parent) == null ?
                                    null : xp.left;
                            }
                            if (xpl != null) {
                                xpl.red = (xp == null) ? false : xp.red;
                                if ((sl = xpl.left) != null)
                                    sl.red = false;
                            }
                            if (xp != null) {
                                xp.red = false;
                                root = rotateRight(root, xp);
                            }
                            x = root;
                        }
                    }
                }
            }
        }

        /**
         * Recursive invariant check
         */
        static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
            TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
                tb = t.prev, tn = (TreeNode<K,V>)t.next;
            if (tb != null && tb.next != t)
                return false;
            if (tn != null && tn.prev != t)
                return false;
            if (tp != null && t != tp.left && t != tp.right)
                return false;
            if (tl != null && (tl.parent != t || tl.hash > t.hash))
                return false;
            if (tr != null && (tr.parent != t || tr.hash < t.hash))
                return false;
            if (t.red && tl != null && tl.red && tr != null && tr.red)
                return false;
            if (tl != null && !checkInvariants(tl))
                return false;
            if (tr != null && !checkInvariants(tr))
                return false;
            return true;
        }
    }

}
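Before closing, it helps to see the three terms from the title in action: capacity is the length of the internal table, the load factor scales the resize threshold, and size counts key-value mappings. Below is a minimal sketch (not part of the JDK source) that peeks at the internal table via reflection. The field name "table" matches the listing above, but setAccessible may be rejected on JDK 9+ without an --add-opens flag, so treat this as a JDK 8-era illustration:

    import java.lang.reflect.Field;
    import java.util.HashMap;

    public class CapacityDemo {
        public static void main(String[] args) throws Exception {
            // initial capacity 16, load factor 0.75 => threshold 16 * 0.75 = 12
            HashMap<String, Integer> map = new HashMap<>(16, 0.75f);
            for (int i = 0; i < 13; i++)      // the 13th put crosses the threshold
                map.put("k" + i, i);

            Field f = HashMap.class.getDeclaredField("table");
            f.setAccessible(true);            // may need --add-opens java.base/java.util=ALL-UNNAMED
            Object[] table = (Object[]) f.get(map);

            System.out.println("size     = " + map.size());     // 13 mappings
            System.out.println("capacity = " + table.length);   // 32 buckets after one resize
        }
    }

With 12 entries the table still has 16 buckets; the 13th put makes size exceed the threshold of 12, and resize() doubles the capacity to 32.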

This concludes HashMap principles (part 3): the meaning of capacity, load factor, and size.
