TensorFlow 1.14.0

Major Features and Improvements

  • This is the first 1.x release containing the compat.v2 module. This module is required to allow libraries to publish code which works in both 1.x and 2.x. After this release, no backwards incompatible changes are allowed in the 2.0 Python API. (See the sketch after this list.)
  • Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with --define=tensorflow_mkldnn_contraction_kernel=0.
  • Non-Windows system libraries are now versioned. This should be a no-op for most users as it affects only system package maintainers or those building extensions to TensorFlow:
    • Python wheels (Pip packages) contain one library file.
      • Linux: libtensorflow_framework.so.1
      • MacOS: libtensorflow_framework.1.dylib
    • Our libtensorflow tarball archives contain the libtensorflow library and two symlinks. MacOS .dylib libraries are the same, but match MacOS library naming requirements (i.e. libtensorflow.1.dylib):
      • libtensorflow.so.1.14.0, the main library
      • libtensorflow.so.1, symlinked to the main library
      • libtensorflow.so, symlinked to .so.1
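
A minimal sketch of the compat.v2 usage described above: library code can target the 2.x API surface from this release onward by importing the compat module. The normalize() helper is a hypothetical example, not a TensorFlow API.

    # Runs unchanged under TF 1.14+ (via compat.v2) and TF 2.x.
    import tensorflow.compat.v2 as tf

    def normalize(x):
        # Standardize a tensor using 2.x-style tf.math ops.
        x = tf.convert_to_tensor(x, dtype=tf.float32)
        return (x - tf.math.reduce_mean(x)) / (tf.math.reduce_std(x) + 1e-8)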

Behavioral changes

  • Set default loss reduction as AUTO for improving reliability of loss scaling with distribution strategy and custom training loops. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used in distribution strategy scope, outside of built-in training loops such as tf.keras compile and fit, we expect the reduction value to be 'None' or 'SUM'. Using other values will raise an error. (See the sketch at the end of this list.)
  • Wraps losses passed to the compile API (strings and v1 losses) which are not instances of the v2 Loss class in a LossWrapper class. => All losses will now use SUM_OVER_BATCH_SIZE reduction as default.
  • Disable run_eagerly and distribution strategy if there are symbolic tensors added to the model using add_metric or add_loss.
  • tf.linspace(start, stop, num) now always uses "stop" as the last value (for num > 1)
  • The behavior of tf.gather is now correct when axis=None and batch_dims < 0.
  • Only create a GCS directory object if the object does not already exist.
  • In map_vectorization optimization, reduce the degree of parallelism in the vectorized map node.
  • Bug fix: loss and gradients should now more reliably be correctly scaled w.r.t. the global batch size when using a tf.distribute.Strategy.
  • Updating cosine similarity loss - removed the negate sign from cosine similarity.
  • DType is no longer convertible to an int. Use dtype.as_datatype_enum instead of int(dtype) to get the same result. (See the one-liner at the end of this list.)
  • Changed default for gradient accumulation for TPU embeddings to true.
  • Callbacks now log values in eager mode when a deferred build model is used.
  • Transitive dependencies on :pooling_ops were removed. Some users may need to add explicit dependencies on :pooling_ops if they reference the operators from that library.
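
To illustrate the AUTO reduction item above, a minimal sketch of a custom training loop under a distribution strategy, written against the 2.x-style API via compat.v2; the batch size and loss choice are illustrative assumptions:

    import tensorflow.compat.v2 as tf

    GLOBAL_BATCH_SIZE = 64  # illustrative value

    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        # AUTO (or SUM_OVER_BATCH_SIZE) raises an error in this context;
        # the reduction must be NONE or SUM.
        loss_obj = tf.keras.losses.BinaryCrossentropy(
            reduction=tf.keras.losses.Reduction.NONE)

    def compute_loss(labels, predictions):
        per_example = loss_obj(labels, predictions)
        # With Reduction.NONE, scale w.r.t. the global batch size by hand.
        return tf.reduce_sum(per_example) / GLOBAL_BATCH_SIZE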
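
And for the DType change above, the migration is a one-liner:

    import tensorflow as tf

    # int(tf.float32) no longer works; use the enum accessor instead.
    enum_value = tf.float32.as_datatype_enum  # 1, i.e. DT_FLOAT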

Bug Fixes and Other Changes

  • Documentation
  • Deprecations and Symbol renames.
    • The GPU configuration env parameter TF_CUDA_HOST_MEM_LIMIT_IN_MB has been changed to TF_GPU_HOST_MEM_LIMIT_IN_MB.
    • Remove unused StringViewVariantWrapper
    • Delete unused Fingerprint64Map op registration
    • SignatureDef util functions have been deprecated.
    • Renamed tf.image functions to remove duplicate "image" where it is redundant.
    • tf.keras.experimental.export renamed to tf.keras.experimental.export_saved_model
    • Standardize the LayerNormalization API by replacing the args norm_axis and params_axis with axis.
    • Tensor::UnsafeCopyFromInternal deprecated in favor of Tensor::BitcastFrom
  • Keras & Python API
    • Add v2 module aliases for:
      • tf.initializers => tf.keras.initializers
      • tf.losses => tf.keras.losses & tf.metrics => tf.keras.metrics
      • tf.optimizers => tf.keras.optimizers
    • Add tf.keras.layers.AbstractRNNCell as the preferred implementation of RNN cell for TF v2. Users can use it to implement RNN cells with custom behavior.
    • Adding clear_losses API to be able to clear losses at the end of a forward pass in a custom training loop in eager mode.
    • Add support for passing a list of lists to the metrics param in Keras compile.
    • Added top-k to precision and recall to keras metrics.
    • Adding public APIs for cumsum and cumprod keras backend functions.
    • Fix: model.add_loss(symbolic_tensor) should work in ambient eager mode.
    • Add name argument to tf.string_split and tf.strings_split
    • Minor change to SavedModels exported from Keras using tf.keras.experimental.export. (The SignatureDef key for evaluation mode is now "eval" instead of "test".) This will be reverted back to "test" in the near future.
    • Updates binary cross entropy logic in Keras when input is probabilities. Instead of converting probabilities to logits, we are using the cross entropy formula for probabilities.
    • Raw TensorFlow functions can now be used in conjunction with the Keras Functional API during model creation. This obviates the need for users to create Lambda layers in most cases when using the Functional API. Like Lambda layers, TensorFlow functions that result in Variable creation or assign ops are not supported (see the sketch after this list).
    • Keras training and validation curves are shown on the same plot.
    • Introduce the dynamic constructor argument in Layer and Model, which should be set to True when using imperative control flow in the call method.
    • Removed dtype from the constructor of initializers, and partition_info from call.
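
A minimal sketch of the raw-TensorFlow-functions item above (layer sizes are illustrative):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(10,))
    x = tf.keras.layers.Dense(16)(inputs)
    # A raw TF op applied to a Keras symbolic tensor; previously this
    # required a Lambda layer. Ops creating Variables remain unsupported.
    outputs = tf.nn.relu(x)
    model = tf.keras.Model(inputs, outputs)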
  • New ops and improved op functionality
    • Add OpKernels for some stateless maps
    • Add v2 APIs for AUCCurve and AUCSummationMethod enums. #tf-metrics-convergence
    • Add tf.math.nextafter op.
    • Add CompositeTensor base class.
    • Add tf.linalg.tridiagonal_solve op.
    • Add opkernel templates for common table operations.
    • Added GPU implementation of tf.linalg.tridiagonal_solve.
    • Added support for TFLite in TensorFlow 2.0.
    • Adds summary trace API for collecting graph and profile information.
    • Add batch_dims argument to tf.gather (see the sketch after this list).
    • Add support for add_metric in the graph function mode.
    • Add C++ Gradient for BatchMatMulV2.
    • Added tf.random.binomial
    • Added gradient for SparseToDense op.
    • Add legacy string flat hash map op kernels
    • Add a ragged size op and register it to the op dispatcher
    • Add broadcasting support to tf.matmul.
    • Add ellipsis (...) support for tf.einsum()
    • Added LinearOperator.adjoint and LinearOperator.H (alias).
    • Added strings.byte_split
    • Add RaggedTensor.placeholder()
    • Add a new "result_type" parameter to tf.strings.split
    • add_update can now be passed a zero-arg callable in order to support turning off the update when setting trainable=False on a Layer of a Model compiled with run_eagerly=True.
    • Add variant wrapper for absl::string_view
    • Add expand_composites argument to all nest.* methods.
    • Add pfor converter for Squeeze.
    • Bug fix for tf.tile gradient
    • Expose CriticalSection in core as tf.CriticalSection.
    • Update Fingerprint64Map to use aliases
    • ResourceVariable support for gather_nd.
    • ResourceVariable's gather op supports batch dimensions.
    • Variadic reduce is supported on CPU
    • Extend tf.function with basic support for CompositeTensor arguments (such as SparseTensor and RaggedTensor).
    • Add templates and interfaces for creating lookup tables
    • Post-training quantization tool supports quantizing weights shared by multiple operations. The models made with versions of this tool will use INT8 types for weights and will only be executable by interpreters from this version onwards.
    • Malformed gif images could result in an out-of-bounds access in the color palette of the frame. This has now been fixed.
    • image.resize now considers proper pixel centers and has new kernels (incl. anti-aliasing).
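
A short sketch of two of the op improvements above, batch_dims in tf.gather and ellipsis support in tf.einsum (all shapes are illustrative):

    import tensorflow as tf

    # tf.gather with batch_dims: one index set per leading batch entry.
    params = tf.reshape(tf.range(2 * 3 * 4), [2, 3, 4])
    idx = tf.constant([[0, 2], [1, 0]])
    gathered = tf.gather(params, idx, batch_dims=1)  # shape [2, 2, 4]

    # tf.einsum now accepts '...' for broadcast/batch dimensions.
    x = tf.random.normal([5, 2, 3])
    y = tf.random.normal([5, 3, 4])
    z = tf.einsum('...ij,...jk->...ik', x, y)        # shape [5, 2, 4]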
  • Performance
    • Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with --define=tensorflow_mkldnn_contraction_kernel=0.
    • Support for multi-host ncclAllReduce in Distribution Strategy.
    • Expose a flag that allows the number of threads to vary across Python benchmarks.
  • TensorFlow 2.0 Development
    • Add v2 sparse categorical crossentropy metric.
    • Allow non-Tensors through v2 losses.
    • Add UnifiedGRU as the new GRU implementation for TF 2.0. Change the default recurrent activation function for GRU from 'hard_sigmoid' to 'sigmoid', and 'reset_after' to True in 2.0. Historically the recurrent activation was 'hard_sigmoid' since it is faster than 'sigmoid'. With the new unified backend between CPU and GPU modes, since the CuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default GRU is compatible with both the CPU and GPU kernels. This enables users with a GPU to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. To use a 1.x pre-trained checkpoint, construct the layer with GRU(recurrent_activation='hard_sigmoid', reset_after=False) to fall back to the 1.x behavior (see the sketch after this list).
    • TF 2.0 - Update metric names to always reflect what the user has given in compile. Affects the following cases: 1. When the name is given as 'accuracy'/'crossentropy'. 2. When an aliased function name is used, e.g. 'mse'. 3. Removing the weighted prefix from weighted metric names.
    • Begin adding Go wrapper for C Eager API
    • image.resize in 2.0 now supports gradients for the new resize kernels.
    • removed tf.string_split from v2 API
    • Expose tf.contrib.proto.* ops in tf.io (they will exist in TF2)
    • "Updates the TFLiteConverter API in 2.0. Changes from_concrete_function to from_concrete_functions."
    • Enable tf.distribute.experimental.MultiWorkerMirroredStrategy working in eager mode.
    • Support both binary and -1/1 label input in v2 hinge and squared hinge losses.
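
Following the GRU note above, a sketch of the two constructions (the unit count is illustrative):

    import tensorflow as tf

    # 2.0 defaults (recurrent_activation='sigmoid', reset_after=True)
    # match the CuDNN kernel, so GPU training takes the fast path:
    gru = tf.keras.layers.GRU(128)

    # To load a 1.x pre-trained checkpoint, fall back to the 1.x defaults:
    legacy_gru = tf.keras.layers.GRU(128,
                                     recurrent_activation='hard_sigmoid',
                                     reset_after=False)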
  • TensorFlow Lite
    • "Adds support for tflite_convert in 2.0."
    • "Remove lite.OpHint, lite.experimental, and lite.constant from 2.0 API."
  • tf.contrib
  • tf.data
    • Add num_parallel_reads and support for passing a Dataset containing filenames into TextLineDataset and FixedLengthRecordDataset (see the sketch after this list).
    • Going forward we operate in TF 2.0; this change is part of the effort to slowly convert XYZDataset to the DatasetV2 type, the official version to be used in TF 2.0. It is motivated by a compatibility issue found when moving contrib.bigtable to tensorflow_io: _BigtableXYZDataset (of type DatasetV2) does not implement the _as_variant_tensor() of DatasetV1. Converting to DatasetV2 removes the overhead of maintaining V1 while we are moving into TF 2.0.
    • Add dataset ops to the graph (or create kernels in eager execution) during Python Dataset object creation, instead of doing it at Iterator creation time.
    • Add support for TensorArrays to tf.data Dataset.
    • Switching tf.data functions to use defun, providing an escape hatch to continue using the legacy Defun.
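
A sketch of the first tf.data item above; the file pattern is a hypothetical placeholder:

    import tensorflow as tf

    # A Dataset of filenames can now be passed directly, together with
    # num_parallel_reads to interleave reads across files.
    filenames = tf.data.Dataset.list_files("/data/logs/*.txt")  # hypothetical
    dataset = tf.data.TextLineDataset(filenames, num_parallel_reads=4)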
  • Toolchains
    • CUDNN_INSTALL_PATH, TENSORRT_INSTALL_PATH, NCCL_INSTALL_PATH, NCCL_HDR_PATH are deprecated. Use TF_CUDA_PATHS instead, which supports a comma-separated list of base paths that are searched to find CUDA libraries and headers.
    • TF code now resides in tensorflow_core and tensorflow is just a virtual pip package. No code changes are needed for projects using TensorFlow; the change is transparent.
  • XLA
    • XLA HLO graphs can now be inspected with the interactive_graphviz tool.
    • Adds Philox support to the new stateful RNG's XLA path.
  • Estimator
    • Use tf.compat.v1.estimator.inputs instead of tf.estimator.inputs
    • Replace contrib references with tf.estimator.experimental.* for APIs in early_stopping.py
    • Determining the "correct" value of --iterations_per_loop for TPUEstimator or DistributionStrategy continues to be a challenge for our users. We propose dynamically tuning --iterations_per_loop, specifically when using TPUEstimator in training mode, based on a user-supplied target for TPU execution time. Users might specify a value such as --iterations_per_loop=300s, which will result in roughly 300 seconds being spent on the TPU between host-side operations.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

1e100, 4d55397500, a6802739, abenmao, Adam Richter, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Alex, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, Andreas Eberle, Andy Craze, Anthony Hsu, Anthony Platanios, Anuj Rawat, Armen Poghosov, armenpoghosov, arp95, Arpit Shah, Ashwin Ramaswami, Augustina Ragwitz, Aurelien Geron, AuréLien Geron, avasid, aweers, awesomealex1, Ayush Agrawal, Bayberry Z, Ben Barsdell, Bharat Raghunathan, Bhavani Subramanian, Bin Fan, blairhan, BléNesi Attila, Bodin-E, Brandon Carter, candy.dc, Cheng Chang, Chao Liu, chenchc, chie8842, Christian Hansen, Christian Sigg, Christoph Boeddeker, Clayne Robison, crafet, csukuangfj, ctiijima, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Daniel Rasmussen, Daniel Salvadori, Dave Airlie, David Norman, Dayananda V, Dayananda-V, delock, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, dmitrievanthony, Donovan Ong, Drew Szurko, Duncan Riach, Dustin Neighly, Edward Forgacs, EFanZh, Evgeniy Polyakov, Fangjun Kuang, Federico Martinez, Fei Hu, Felix Lemke, Filip Matzner, fo40225, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Grzegorz George Pawelczak, Grzegorz Pawelczak, Gurpreet Singh, Gyoung-Yoon Ryoo, HanGuo97, Hanton Yang, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, Huan Li (李卓桓), I-Hong Jhuo, Ilango R, Innovimax, Irene Dea, Jacky Ko, Jakub Lipinski, Jason Zaman, jcf94, Jeffrey Poznanovic, Jens Elofsson, Jeroen BéDorf, jhalakp, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Joeran Beel, Jonas Rauber, Jonathan, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu, K Yasaswi Sri Chandra Gandhi, K. Hodges, Kaixi Hou, Karl Lessard, Karl Weinmeister, Karthik Muthuraman, Kashif Rasul, KDR, Keno Fischer, Kevin Mader, Kilaru Yasaswi Sri Chandra Gandhi, kjopek, Koan-Sin Tan, kouml, ktaebum, Lakshay Tokas, Laurent Le Brun, Letian Kang, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, luxupu, lvli, Ma, Guokai, Mahmoud Abuzaina, Maksym Kysylov, Mandar Deshpande, manhyuk, Marco Gaido, Marek Drozdowski, Mark Collier, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL Schoentgen, Miguel Morin, Mihail Salnikov, Mikalai Drabovich, Mike Arpaia, Mike Holcomb, monklof, Moses Marin, Mr. Metal, Mshr-H, nammbash, Natalia Gimelshein, Nathan Luehr, Nayana-Ibm, neargye, Neeraj Pradhan, Nehal J Wani, Nick, Nick Lewycky, Niels Ole Salscheider, Niranjan Hasabnis, nlewycky, Nuka-137, Nutti, olicht, omeir1, P Sudeepam, Palmer Lao, Pan Daoxin, Pariksheet Pinjari, Pasquale Minervini, Pavel Akhtyamov, Pavel Samolysov, PENGWA, Philipp Jund, Pooya Davoodi, Pranav Marathe, R S Nikhil Krishna, Rohit Gupta, Roland Zimmermann, Roman Soldatow, rthadur, Ruizhe, Ryan Jiang, saishruthi, Samantha Andow, Sami Kama, Sana-Damani, Saurabh Deoras, sdamani, Sean Morgan, seanshpark, Sebastien Iooss, Sergii Khomenko, Serv-Inc, Shahzad Lone, Shashank Gupta, Shashi, shashvat, Shashvat Chand Shahi, Siju, Siju Samuel, Snease-Abq, Spencer Schaber, sremedios, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Subin, Sumesh Udayakumaran, sunway513, Supriya Rao, sxwang, Takeo Sawada, Taylor Jakobson, Taylor Thornton, Ted Chang, ThisIsPIRI, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Tim Zaman, tomguluson92, Tongxuan Liu, Trent Lo, TungJerry, Tyorden, v1incent, Vagif, vcarpani, Vijay Ravichandran, Vikram Tiwari, Viktor Gal, Vincent, Vishnuvardhan Janapati, Vishwak Srinivasan, Vitor-Alves, wangsiyu, wateryzephyr, WeberXie, WeijieSun, Wen-Heng (Jack) Chung, wenxizhu, Will Battel, William D. Irons, wyzhao, Xiaoming (Jason) Cui, Xiaoquan Kong, Xin, Yasuhiro Matsumoto, ymodak, Yong Tang, Younes Khoudli, Yuan (Terry) Tang, Yuan Lin, Yves-Noel Weweler, Zantares, zhuoryin, zjjott, 卜居, 王振华 (Zhenhua Wang), 黄鑫
