[pull] master from tensorflow:master #169

Status: Open. Wants to merge 1,649 commits into master (base) from tensorflow:master.
Commits (1,649)
3040885
Decrease Linux CPU wheel limit size to 260M.
tensorflower-gardener Jan 13, 2025
5e78ccd
PR #21375: [ds-fusion] Get While loop analysis with copy fusion
shraiysh Jan 13, 2025
75319e0
Fix wrong index when inserting a copy from host to a call's parameter
tensorflower-gardener Jan 13, 2025
9a8f328
Speed-up DepthwiseInputCopyOp
TocarIP Jan 13, 2025
bad2482
Clarify error messaging for struct size check
jparkerh Jan 13, 2025
ef61a40
Add a utility pass for writing atom programs and main IFRT func to fi…
ICGog Jan 13, 2025
c68bd7c
Add original cp name prefix to the names of the decomposed instructio…
toli-y Jan 13, 2025
441b5cf
Format one constraint for stablehlo.scatter operation.
ZixuanJiang Jan 13, 2025
0e97382
In preparation for the upcoming JAX support for the `StringDType`, th…
tensorflower-gardener Jan 13, 2025
1a9c9cc
Internal change to update visibility.
jimlinntu Jan 13, 2025
6c3ac7b
[PJRT:C] Implement PJRT_AsyncHostToDeviceTransferManager class. Intro…
sizhit2 Jan 13, 2025
2ab46f7
[XLA:TPU] Avoid unnecessary and potentially expensive computation sor…
tensorflower-gardener Jan 13, 2025
db4a60f
Disable failed test.
rtg0795 Jan 13, 2025
a2eab46
Extracts a util function `MakeACopyAndReturnItsPartitionedHlo` from d…
ZixuanJiang Jan 13, 2025
202471e
Move convert_async_collectives_to_sync to collectives directory
frgossen Jan 14, 2025
511cb60
Make dot thunk capable of running without a thread pool.
tensorflower-gardener Jan 14, 2025
8e12d62
[xla:gpu] Move XLA:GPU runtime to xla/backends/gpu
ezhulenev Jan 14, 2025
46ef0e1
Rollback of PR #21375
fhoushmand Jan 14, 2025
6194269
Allow composite op odml.quantize_and_dequantize to be converted to cu…
daverim Jan 14, 2025
750b8be
Fix an overflow issue in TransposePlan
junwhanahn Jan 14, 2025
5d6cde1
Support evaluation in the absence of layouts when possible
frgossen Jan 14, 2025
48a3a27
Create copy if the operands of gather/scatter instructions overlap.
ZixuanJiang Jan 14, 2025
867fddd
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 14, 2025
720a8ec
Moving the logic of making a copy for rhs from `PartitionDot` to `Han…
ZixuanJiang Jan 14, 2025
ecdf410
PR #21104: [NVIDIA GPU] Preserve backend config when folding transpose
Tixxx Jan 14, 2025
752fdc3
Update to match upstream API change (NFC).
jpienaar Jan 14, 2025
67dce50
Automated Code Change
tensorflower-gardener Jan 14, 2025
ad41d6f
Integrate Triton up to [632bfc34](https://github.com/openai/triton/co…
vwbaker Jan 14, 2025
80e5c84
Automated Code Change
tensorflower-gardener Jan 14, 2025
04834ae
Automated Code Change
tensorflower-gardener Jan 14, 2025
231d739
Automated Code Change
tensorflower-gardener Jan 14, 2025
946a302
PR #21223: [nfc] Cleanup build files for expander transforms
shraiysh Jan 14, 2025
0458868
Automated Code Change
tensorflower-gardener Jan 14, 2025
96fe0f8
compat: Update forward compatibility horizon to 2025-01-14
tensorflower-gardener Jan 14, 2025
289075a
Update GraphDef version to 2107.
tensorflower-gardener Jan 14, 2025
5c2e24d
[XLA:GPU] move TransposeFolding after simplifier pipeline
metaflow Jan 14, 2025
88136f4
[XLA:GPU] Fix the unpack dim calculation for I4 rewrite with non majo…
loislo Jan 14, 2025
aeb438b
[XLA:CPU] Remove unused stream_executor host code.
WillFroom Jan 14, 2025
8003fb4
Create TmaDescriptor class. This will be used to pass information ab…
vwbaker Jan 14, 2025
bd43a82
[XLA] do not log warnings from hlo pass
metaflow Jan 14, 2025
ee156c1
[XLA:GPU] Enable vectorization of the indices operand for scatter.
pifon2a Jan 14, 2025
af5275c
Update code links in documentation.
akuegel Jan 14, 2025
fd41705
[XLA:GPU] Enable int4 matmul rewriting with Triton MLIR rewriter by d…
loislo Jan 14, 2025
e7f7cef
Integrate LLVM at llvm/llvm-project@b270525f730b
krasimirgg Jan 14, 2025
9263c01
[XLA:TPU] Disable memory space assignment pass via `exec_time_optimiz…
tensorflower-gardener Jan 14, 2025
c141307
[XLA:GPU] Require packed dot operands to be packed along contracting …
mooskagh Jan 14, 2025
eaa68cb
AHWB must outlive tensor buffer
tensorflower-gardener Jan 14, 2025
251362e
Give meaningful names to HLO modules in `triton_fusion_emitter_int4_d…
loislo Jan 14, 2025
4e74930
Adds InferenceRunnerLiteRt class
tensorflower-gardener Jan 14, 2025
8f888e5
Use `@com_google_googletest//:gtest_main` instead of `tsl/platform:te…
ddunl Jan 14, 2025
7f1cdb4
[XLA:GPU] Create cuda-specific api for the runtime to populate the te…
vwbaker Jan 14, 2025
d65ab24
Move triton codegen to xla/backends/gpu/codegen/triton
akuegel Jan 14, 2025
b13ee8b
Adding vectorization support for atomic_rmw.
Moerafaat Jan 14, 2025
56e196b
Reverts fd41705e0ad7a123a9d01b8be2a3b34b3266493e
loislo Jan 14, 2025
29a59f3
PR #21380: Add F4E2M1FN and F8E8M0FNU types
sergey-kozub Jan 14, 2025
af7f5e8
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 14, 2025
f04ac71
[XLA:TPU] Use `MakeComputationPostOrder()` instead of `MakeComputatio…
tensorflower-gardener Jan 14, 2025
2ed0564
Integrate StableHLO at openxla/stablehlo@b2d36c56
GleasonK Jan 14, 2025
ea89878
Fix a typo in the TFG dialect's GraphFuncOp::getCalledFunction
eunjaekim-0 Jan 14, 2025
2a6c919
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 14, 2025
df078d6
Updates the Evaluator to calculate a maximum memory lower bound (i.e.…
tensorflower-gardener Jan 14, 2025
0a99598
Makes keyword arguments of functions loaded from TF1 SavedModels be t…
wangpengmit Jan 14, 2025
36cbae5
[xla:cpu:benchmarks] Fix benchmarks not running.
penpornk Jan 14, 2025
6536782
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 14, 2025
d4e4516
Change `tsl/platform:test_main` back to alias of `test_main` defined …
ddunl Jan 14, 2025
d93cc4f
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 14, 2025
3595178
When reduce_scoped_memory is enabled, the size of scoped allocations …
tensorflower-gardener Jan 14, 2025
142a83c
Always use int32 dtype for the inputs of dynamic partition. Only int3…
tensorflower-gardener Jan 14, 2025
30e98fc
Update ml_dtypes version in ml_dtypes.cmake.
reedwm Jan 14, 2025
bdd0c54
Delete remnants of PhaseOrderPipeline.
SandSnip3r Jan 14, 2025
5decb75
[PJRT:CPU] Improve handling of input buffers with errors and last exe…
hyeontaek Jan 15, 2025
f6439c5
Deprecate EVENT_TYPE_FLOW in TraceViewer. We only use Flow V2 now, so…
tensorflower-gardener Jan 15, 2025
f0d0ebd
This is the first step in unifying the various StreamExecutor::Alloca…
klucke Jan 15, 2025
f859d5f
Use buffer offset when the mlir module size is greater than 2GB
vamsimanchala Jan 15, 2025
4440f81
Fix igamma rendering.
MarkDaoust Jan 15, 2025
160d4b2
Remove a check for the sharding in `FindPadWithWrapPattern`.
ZixuanJiang Jan 15, 2025
501b030
[xla:cpu] Make sure that NanoRt managed temp is properly aligned
ezhulenev Jan 15, 2025
79d8340
Mark mhlo::add,sub,min as legal for misc types. These are supported b…
LukeBoyer Jan 15, 2025
80c31ed
Go: Update generated wrapper functions for TensorFlow ops.
tensorflower-gardener Jan 15, 2025
0692ce0
CompiledModel: Refactor lock usage
terryheo Jan 15, 2025
919b54b
Integrate LLVM at llvm/llvm-project@19032bfe87fa
tensorflower-gardener Jan 15, 2025
fcfd33b
Add DmaMap and DmaUnmap.
pschuh Jan 15, 2025
112af64
Automated Code Change
tensorflower-gardener Jan 15, 2025
5cc1c9e
Automated Code Change
tensorflower-gardener Jan 15, 2025
a8ddabd
Automated Code Change
tensorflower-gardener Jan 15, 2025
8c5eb30
Automated Code Change
tensorflower-gardener Jan 15, 2025
9b47fef
Migrate tf lite owned parsers from mlir_roundtrip_flags to their dire…
rocketas Jan 15, 2025
7f2d1a6
Automated Code Change
tensorflower-gardener Jan 15, 2025
0127a68
Reverts d42f44f9c6912bd26a0810f004a4cd8527b9d9fd
tensorflower-gardener Jan 15, 2025
b99d4a7
Automated Code Change
tensorflower-gardener Jan 15, 2025
795c3bd
No public description
tensorflower-gardener Jan 15, 2025
b52383b
Propagate op name in the tfrt OpKernelRunner.
SiqiaoWu1993 Jan 15, 2025
4f3df8c
Automated Code Change
tensorflower-gardener Jan 15, 2025
c85ba8f
Fix Thread-safety warning.
tensorflower-gardener Jan 15, 2025
f1d93e0
Update GraphDef version to 2108.
tensorflower-gardener Jan 15, 2025
b64fda8
compat: Update forward compatibility horizon to 2025-01-15
tensorflower-gardener Jan 15, 2025
415db29
Delete file that already got moved to new location.
akuegel Jan 15, 2025
1dc2ac8
[XLA] typo and comment formatting NFC
metaflow Jan 15, 2025
f9e075e
[XLA:GPU] Add `RewritePattern`s for binary elementwise ops in `Simpli…
bchetioui Jan 15, 2025
1fa6af0
Move fusions/triton.* to triton/fusion.*
akuegel Jan 15, 2025
b4e9672
[XLA:GPU] drop unused gpu_emitter in ir_emitter_unnested
metaflow Jan 15, 2025
a604ad2
[XLA] Fix build failure in MacOS
mooskagh Jan 15, 2025
2c473ad
[XLA:GPU] Add missing pass in the Triton CUDA compilation pipeline.
bchetioui Jan 15, 2025
f18b647
PR #21410: [ROCm] Register gfx12xx
ScXfjiang Jan 15, 2025
5ed1871
Move fusion tests to xla/backends/gpu/codegen/emitters/tests/
akuegel Jan 15, 2025
d046cb0
[XLA:GPU] IWYU in hlo_pass_fix.h NFC
metaflow Jan 15, 2025
41b8526
Update the link to the fusion emitter tests.
akuegel Jan 15, 2025
f4b2a92
Move remaining fusion codegen to xla/backends/gpu/codegen/
akuegel Jan 15, 2025
d4f74e9
Actually allocated a temporary buffer before calling NanoRt Executabl…
tensorflower-gardener Jan 15, 2025
ba3491a
[XLA:GPU] Delete GPU IrEmitter::HandleFusion
metaflow Jan 15, 2025
55a19c8
PR #21396: [ROCm] Fix build break due to XNNPACK update and add NCCL_…
mmakevic-amd Jan 15, 2025
6022a0d
PR #21474: [ROCm] Fix gpu_index_test
mmakevic-amd Jan 15, 2025
e4cbfc7
[XLA:GPU] Migrate collective bytes transfered to `BytesTransferred`.
golechwierowicz Jan 15, 2025
a73edc3
[XLA:GPU] Calculate packing dim for s4 dots from stride and shape val…
loislo Jan 15, 2025
124f5fa
[xla] scatter_simplifier: simplify IsSimplifiedScatter function
cota Jan 15, 2025
e8c1fcd
[XLA:GPU] Drop most of GPU elemental IR emitter
metaflow Jan 15, 2025
ce9789a
[XLA:GPU] Make LayoutAssignment aware of Cub Radix Sort custom calls.
thomasjoerg Jan 15, 2025
376a06d
add changelist number to tfrt pjrt impl
jparkerh Jan 15, 2025
0928aab
[XLA:GPU] Use Cub RaddixSort for f16, f32, and f64 sorts in Numpy ord…
thomasjoerg Jan 15, 2025
4b4d846
Move various headers to `xla/tsl`
ddunl Jan 15, 2025
aebfc64
Support expanding ragged all-to-all dims similar to all-to-alls.
tensorflower-gardener Jan 15, 2025
837b770
Update to match upstream API change (NFC).
jpienaar Jan 15, 2025
bb4399d
OpenCL tensor buffer for litert
tensorflower-gardener Jan 15, 2025
c61956c
Remove TFL Interpreter deprecation notice from LiteRT Interpreter.
pak-laura Jan 15, 2025
e3f3385
AllocationRequest.end_time is inclusive. The start and end times of M…
sparc1998 Jan 15, 2025
f07c615
[XLA:Python] Fix scoping of gil_release.
hawkinsp Jan 15, 2025
ec74951
Canonicalize inputs of conditionals into tuples in `ConditionalCanoni…
junwhanahn Jan 15, 2025
25e5c26
Create a generic tsl::Allocator that works in terms of stream_executo…
klucke Jan 15, 2025
96f6c59
Reverts 7869999086e70e24c0d1bda491d80cd28b127866
ishark Jan 15, 2025
b42138d
Implement CopyRawToHost for TfrtCpuClient.
pschuh Jan 15, 2025
27ebded
[xla:cpu:benchmarks] Move benchmarks to the new //xla/backends/cpu fo…
penpornk Jan 15, 2025
854a426
Reverts a7703e73d050fe62fd59ffd4435a8f9249dfc99c
tensorflower-gardener Jan 15, 2025
02f7192
PR #21273: [XLA:GPU] Add support for NCCL ncclCommInitRankScalable API
nvcastet Jan 15, 2025
96f2aeb
[xla:pjrt] Add support for forwarding FFI context to C API client
ezhulenev Jan 15, 2025
1893c10
Reverts ad41d6fe7fda310fff0a94185e47edd525c7c21c
chsigg Jan 15, 2025
d92a503
Add functions for working with dispatch op custom options using the f…
LukeBoyer Jan 15, 2025
06e5b3b
[xla:cpu] Use StringRef::contains in test as mangling rules might cha…
ezhulenev Jan 15, 2025
312fe36
In SPMD partitioner, preprocess the sharding on singleton dimensions …
ZixuanJiang Jan 15, 2025
3bc00d7
No public description
tensorflower-gardener Jan 15, 2025
583b4aa
Make ML Drift's fingerprinting logic into a helper function
tf-marissaw Jan 15, 2025
9df680f
Add field in internal model for storing (non-tensor) buffers that wil…
LukeBoyer Jan 15, 2025
8aef34c
Add HasProperty to HloRunnerInterface and implementations.
nvgrw Jan 16, 2025
bb0f4a4
[xla:collectives] Always use ncclCommInitRankConfig for clique initia…
ezhulenev Jan 16, 2025
4d17128
Fix ASSERT_OK_AND_ASSIGN issue in ClientLibraryTestRunner.
nvgrw Jan 16, 2025
3d0c5fa
Implement CHLO->StableHLO ragged_dot mode 1 decomposition.
ghpvnist Jan 16, 2025
463dacf
Port custom_call_test to derive from `HloTestBase`.
nvgrw Jan 16, 2025
ead209c
Update users of TSL headers and targets to new location in XLA
ddunl Jan 16, 2025
0647b3b
Update to match upstream API change (NFC).
jpienaar Jan 16, 2025
71a71fd
Automated Code Change
tensorflower-gardener Jan 16, 2025
75f6410
Move emitter_loc_op_builder to xla/codegen directory.
akuegel Jan 16, 2025
67d44d1
[xla:gpu] Cleanup AsNcclUniqueIds
ezhulenev Jan 16, 2025
a184edd
compat: Update forward compatibility horizon to 2025-01-16
tensorflower-gardener Jan 16, 2025
d818f1b
Update GraphDef version to 2109.
tensorflower-gardener Jan 16, 2025
c7f9c9a
[XLA:GPU] Enable Triton MLIR int4 -> int8 rewrite
loislo Jan 16, 2025
8b8be4d
[XLA:GPU] Fix MacOS build
mooskagh Jan 16, 2025
bc2cb8f
[XLA] Fix undefined behaviors for missing HloModule schedule.
allanrenucci Jan 16, 2025
fc3b0b4
Avoids out of bounds access on 'begins_are_dynamic' and 'ends_are_dyn…
Jan 16, 2025
7cf6531
Integrate LLVM at llvm/llvm-project@c24ce324d563
krasimirgg Jan 16, 2025
7fc0ca4
PR #21511: Fix bitcast transposes in layout normalization.
jreiffers Jan 16, 2025
4aa6da0
[xla:cpu:benchmarks] Update microbenchmark path in CPU benchmark work…
penpornk Jan 16, 2025
a3c7c07
Support loading unoptimized HLO snapshot with arguments.
Jan 16, 2025
e8b5c9c
[XLA:GPU] Use inline assembly for vectorized AtomicRMWOp for Hopper.
Moerafaat Jan 16, 2025
1be0fd1
[xla] scatter_simplifier: document simple scatter semantics + add exa…
cota Jan 16, 2025
a013822
Reverts 312fe365ee90ca039ba02c25e0de5b124aaad357
tensorflower-gardener Jan 16, 2025
f89cf59
[XLA:GPU] Dumping unoptimized HLO snapshots should not trigger dumpin…
Jan 16, 2025
7f74e23
[XLA:Python] Port pmap_lib.cc to use PyType_FromSpec() to construct a…
hawkinsp Jan 16, 2025
4562e58
[XLA:GPU] remove GpuElementalIrEmitter
metaflow Jan 16, 2025
71dc83c
Add nsz fastmath flag to AddF ops in reducers.
akuegel Jan 16, 2025
c4f826e
[xla:cpu] scatter_benchmark: remove unique_indices=true
cota Jan 16, 2025
f52456d
#sdy support StableHLO from refining Shardy ops with polymorphic shapes
bartchr808 Jan 16, 2025
5062193
Title: NCCL cost model adjustment
tensorflower-gardener Jan 16, 2025
255e99c
[IFRT] Add ifrt.bzl and pjrt_ifrt.bzl for package management.
hyeontaek Jan 16, 2025
e003c6e
[XLA:CollectivePipeliner] Introduce all-gather as a formatting op.
seherellis Jan 16, 2025
b38eed8
Update users of TSL headers and targets to new location in XLA
ddunl Jan 16, 2025
47a216b
[XLA:CPU] Add initial thunk serialization.
tensorflower-gardener Jan 16, 2025
7cd6200
[xla:cpu] Add IfRt benchmark with many kernels and results
ezhulenev Jan 16, 2025
42d44a0
Add ConvertXlaScatterOp pattern to odml::PopulateLegalizeTfPatterns
kevinbchen Jan 16, 2025
02d117c
Add hbm read and write time to Grappler::Costs.
tensorflower-gardener Jan 16, 2025
7b25d41
Fix typo in comment for stablehlo->mhlo converter
ghpvnist Jan 16, 2025
a7109ae
Move GraphImportConfig stringification to import file and delete mlir…
rocketas Jan 16, 2025
ad06605
Port array_elementwise_ops_test to derive from `HloTestBase`.
nvgrw Jan 16, 2025
1a2490a
PR #18838: [NVIDIA GPU] Support multi-operand collective-permute
terryysun Jan 16, 2025
94522ac
Port convolution_test to derive from `HloTestBase`.
nvgrw Jan 16, 2025
9c2454c
[XLA:TPU] Disable optimization passes based on effort flag
tensorflower-gardener Jan 16, 2025
99ae54b
[TSL] Don't truncate thread ids
majnemer Jan 16, 2025
dc9d91e
Rename variables for clarity and add missing imports
ghpvnist Jan 16, 2025
42e87d3
Port convert_test to derive from `HloTestBase`.
nvgrw Jan 16, 2025
851b4ef
cleanup of deprecated test methods
tensorflower-gardener Jan 16, 2025
76e0e49
LiteRT: Update Model API
terryheo Jan 16, 2025
e8f0937
Pass stablehlo-ext-prepare-for-hlo-export : Migrate from MHLO to Stab…
abhigunj Jan 16, 2025
203379b
Add a couple of unit tests for Timespan::ExpandToInclude().
tensorflower-gardener Jan 16, 2025
6527db5
Add basic DCN transfer library.
pschuh Jan 16, 2025
75cf748
[Emitters] Move ir/ and transforms/ under emitters/ directory.
pifon2a Jan 16, 2025
034dfa9
Reverts a7109ae416b6448afa708ec0b7925c7b0daadd81
rocketas Jan 16, 2025
12f5a22
Prepare code for breaking change in Protobuf C++ API.
tensorflower-gardener Jan 17, 2025
0059924
[XLA:SchedulingAnnotations] Support having multiple computations cont…
seherellis Jan 17, 2025
e41ae0a
Reverts 3bc00d7bec9ea7051cebae593cab467feba4ddc9
tensorflower-gardener Jan 17, 2025
4c330eb
Rollback: Move GraphImportConfig stringification to import file and d…
tensorflower-gardener Jan 17, 2025
6183bbf
Automated Code Change
tensorflower-gardener Jan 17, 2025
6b69c3a
Create `xla.bazelrc` in preparation for XLA having a completely indep…
ddunl Jan 17, 2025
c9aeaf2
Introduce shape splitting into MSA.
tensorflower-gardener Jan 17, 2025
3a74f2d
Update GraphDef version to 2110.
tensorflower-gardener Jan 17, 2025
9fe133a
compat: Update forward compatibility horizon to 2025-01-17
tensorflower-gardener Jan 17, 2025
a006f15
[xla:cpu:xnn] Add Dot op support to XNN fusion emitter.
penpornk Jan 17, 2025
725ca07
[XLA:GPU] Unit test to ensure Cub Sort honors XLA's totalorder sort s…
thomasjoerg Jan 17, 2025
1a6487a
[XLA:GPU] Fix comment for `xla_gpu_analytical_latency_estimator_optio…
golechwierowicz Jan 17, 2025
be79d7c
[XLA:GPU] Simplify and refactor dot algorithm tests.
loislo Jan 17, 2025
e9d7359
PR #20274: [ROCm] Emit allocas on function entry in lower_tensors.cc
draganmladjenovic Jan 17, 2025
000dd03
Automated Code Change
tensorflower-gardener Jan 17, 2025
29a4e12
Internal cosmetic change
tensorflower-gardener Jan 17, 2025
631d8ed
Make creation of CompilationProvider depend on DebugOptions
beckerhe Jan 17, 2025
841fb44
[XLA:CPU] Implement deserialization from proto to thunks
tensorflower-gardener Jan 17, 2025
a106f55
PR #21549: Remove rocdl_path dependency from non rocm builds
alekstheod Jan 17, 2025
14974f2
[XLA:GPU][NFC] Add debugging information in case a test break.
bchetioui Jan 17, 2025
1890f83
Move fusions.* and fusion_emitter.* to backends/gpu/codegen directory.
akuegel Jan 17, 2025
45165d7
Automated Code Change
tensorflower-gardener Jan 17, 2025
344a9a1
[xla:gpu] extract atomic_rmw_utils to a separate library
cota Jan 17, 2025
cc3006a
[XLA:GPU][NFC] Optimize `TritonEmitterLongDeviceTest.FusionWithOutput…
bchetioui Jan 17, 2025
887b258
Move Preprocessing of graphdef to graph_constructor to decouple code …
rocketas Jan 17, 2025
4887ecb
[xla:cpu] scatter_benchmark: add SimpleScatterReduceF32_R3
cota Jan 17, 2025
604ed30
[XLA] Cleanup global_data.h references
thcmbs Jan 17, 2025
3c8521b
[xla:gpu] move some transforms to xla/codegen/emitters/transforms
cota Jan 17, 2025
83ccbd9
Fix up Windows libtensorflow artifacts to have the same location/nami…
belitskiy Jan 17, 2025
1ae20f1
Add some clarifying comments for Dockerfiles.
belitskiy Jan 17, 2025
738fce0
Update users of TSL headers and targets to new location in XLA
ddunl Jan 17, 2025
f1fe5e5
Dump CL number as part of the filename of a HLO dump
changm Jan 17, 2025
d81a2fe
Add idle and busy time for TPUs to OpStats.
bmass02 Jan 17, 2025
2373eab
Perform Set key operation first in the Exchange Topology to keep exis…
ishark Jan 17, 2025
019da17
[HLO Componentization] Add deprecation timeline to aliased build targ…
sdasgup3 Jan 17, 2025
0ee083d
[MHLO] Add parity with HLO for bounded dynamism in broadcast_in_dim /…
GleasonK Jan 17, 2025
55efa11
Integrate StableHLO at openxla/stablehlo@c125b328
sdasgup3 Jan 17, 2025
a7c0260
Integrate LLVM at llvm/llvm-project@bf17016a92bc
tensorflower-gardener Jan 17, 2025
b339767
Pass flatten-tuple : Migrate from MHLO to StableHLO
abhigunj Jan 17, 2025
371a3a6
Fix wrong name of the attribute for channel handle
ghpvnist Jan 17, 2025
ccd4edc
Change HostOffloader to mark every DynamicUpdateSlice which operates …
SandSnip3r Jan 18, 2025
65f893d
Fix issue with SparseCore device ids and trace viewer.
bmass02 Jan 18, 2025
b3ea35c
Create a SourceTargetPairs class.
toli-y Jan 18, 2025
90f3b95
[XLA:MSA] Add dynamic-slice to async conversion in msa
fhoushmand Jan 18, 2025
5da3baf
print is_fully_replicated in DebugString
tensorflower-gardener Jan 18, 2025
9d1b4fa
Automated Code Change
tensorflower-gardener Jan 18, 2025
f1a3c9b
Automated Code Change
tensorflower-gardener Jan 18, 2025
118b14b
When calling AppendFeatureValues, reserve capacity for the new total …
tensorflower-gardener Jan 18, 2025
140eb66
Automated Code Change
tensorflower-gardener Jan 18, 2025
ca4b381
compat: Update forward compatibility horizon to 2025-01-18
tensorflower-gardener Jan 18, 2025
3ed87a6
Update GraphDef version to 2111.
tensorflower-gardener Jan 18, 2025
cf1fe10
Automated Code Change
tensorflower-gardener Jan 18, 2025
1ec0e02
Automated Code Change
tensorflower-gardener Jan 18, 2025
d4d42d2
Automated Code Change
tensorflower-gardener Jan 18, 2025
8e8df64
Automated Code Change
tensorflower-gardener Jan 18, 2025
f7c52b9
Automated Code Change
tensorflower-gardener Jan 18, 2025
The diff you're trying to view is too large. We only load the first 3000 changed files.
151 changes: 68 additions & 83 deletions .bazelrc

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions CONTRIBUTING.md
@@ -250,7 +250,7 @@ There are two ways to run TensorFlow unit tests.
bazel by doing as follows:

```bash
export flags="--config=opt -k"
export flags="--config=linux -k"
```

If the tests are to be run on the GPU:
@@ -259,15 +259,15 @@ There are two ways to run TensorFlow unit tests.
flag.

```bash
export flags="--config=opt --config=cuda -k"
export flags="--config=linux --config=cuda -k"
```

* For TensorFlow versions prior v.2.18.0: Add CUDA paths to
LD_LIBRARY_PATH and add the `cuda` option flag.

```bash
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export flags="--config=opt --config=cuda -k"
export flags="--config=linux --config=cuda -k"
```

For example, to run all tests under tensorflow/python, do:
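
A minimal sketch of the invocation that line refers to (the command itself is cut off in this view), assuming the `flags` variable exported above and a checkout with Bazel configured:

```bash
# Run every test target under tensorflow/python with the flags set earlier.
bazel test ${flags} //tensorflow/python/...
```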
2 changes: 1 addition & 1 deletion RELEASE.md
@@ -17,7 +17,7 @@
while preserving the implementation flexibility to change the values of
these constants in the future.)
* Interpreter:
* `tf.lite.Interpreter` gives warning of future deletion and a redirection notice to its new location at `ai_edge_litert.interpreter`. See the [migration guide](https://ai.google.dev/edge/litert/migration) for details.
* `tf.lite.Interpreter` gives deprecation warning redirecting to its new location at `ai_edge_litert.interpreter`, as the API `tf.lite.Interpreter` will be deleted in TF 2.20. See the [migration guide](https://ai.google.dev/edge/litert/migration) for details.

### Known Caveats

19 changes: 19 additions & 0 deletions ci/devinfra/docker/windows/Dockerfile
@@ -1,3 +1,7 @@
# NOTE: This Dockerfile is no longer in use.
# It is kept just in case, but it's recommended to use the 2022 version,
# and that is what internal CI uses as well.

# This Dockerfile creates an image that has:
# - the correct MTU setting for networking from inside the container to work.
# - Visual Studio 2022 Build Tools
@@ -42,6 +46,7 @@ RUN C:\TEMP\vs_community.exe \
--add Microsoft.VisualStudio.Workload.NativeDesktop \
--add Microsoft.VisualStudio.Component.VC.14.39.17.9.x86.64 \
--add Microsoft.VisualStudio.Component.Windows11SDK.22621 \
--add Microsoft.VisualStudio.Component.VC.ATL \
|| IF "%ERRORLEVEL%"=="3010" EXIT 0

SHELL ["powershell.exe", "-ExecutionPolicy", "Bypass", "-Command", \
@@ -152,4 +157,18 @@ RUN (New-Object Net.WebClient).DownloadFile( \
$env:PATH = [Environment]::GetEnvironmentVariable('PATH', 'Machine') + ';C:\tools\bazel'; \
[Environment]::SetEnvironmentVariable('PATH', $env:PATH, 'Machine');

ENV CLOUDSDK_CORE_DISABLE_PROMPTS 1
RUN (New-Object Net.WebClient).DownloadFile('https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.zip', 'C:\Temp\google-cloud-sdk.zip'); \
Expand-Archive -Path 'C:\Temp\google-cloud-sdk.zip' -DestinationPath $env:ProgramFiles -Verbose:$false
RUN & \"$env:ProgramFiles\\google-cloud-sdk\\install.bat\" --path-update false
RUN $env:Path += \";$env:ProgramFiles\\google-cloud-sdk\\bin\"; \
[Environment]::SetEnvironmentVariable('Path', $env:Path, [EnvironmentVariableTarget]::Machine);
# Re-enable prompts for interactive use.
ENV CLOUDSDK_CORE_DISABLE_PROMPTS=""

# MSYS attempts to use non-cmd versions, which aren't meant for Windows
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias gcloud=gcloud.cmd'
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias gsutil=gsutil.cmd'
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias bq=bq.cmd'

SHELL ["cmd.exe", "/s", "/c"]
207 changes: 207 additions & 0 deletions ci/devinfra/docker/windows2022/Dockerfile
@@ -0,0 +1,207 @@
# This Dockerfile creates an image that has:
# - the correct MTU setting for networking from inside the container to work.
# - Visual Studio 2022 Build Tools
# - MSVC 14.39
# - LLVM/Clang 18.1.4
# - MSYS2 + curl, git, patch, vim, unzip, zip
# - Python 3.9 - 3.13
# - Bazelisk 1.22.1
# - JDK 21 (Azul Zulu)

FROM mcr.microsoft.com/windows/servercore:ltsc2022

SHELL ["powershell.exe", "-ExecutionPolicy", "Bypass", "-Command", \
"$ErrorActionPreference='Stop'; $ProgressPreference='SilentlyContinue';$VerbosePreference = 'Continue';"]

# Enable long paths
RUN New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force

RUN md C:\TEMP
RUN md C:\TMP
ENV TMP "C:/TMP"
ENV TEMP "C:/TEMP"

# Install 7-Zip.
RUN (New-Object Net.WebClient).DownloadFile('https://www.7-zip.org/a/7z2201-x64.msi', '7z.msi'); \
Start-Process msiexec.exe -ArgumentList \"/i 7z.msi /qn /norestart /log C:\\TEMP\\7z_install_log.txt\" -wait; \
Remove-Item .\7z.msi;

# Download the Visual Studio 2022 Installer.
RUN (New-Object Net.WebClient).DownloadFile('https://aka.ms/vs/17/release/vs_community.exe', 'C:\TEMP\vs_community.exe');
# Install Visual Studio 2022 Build Tools + Compiler
SHELL ["cmd", "/S", "/C"]
# Packages, and component versions, can be found here:
# https://learn.microsoft.com/en-us/visualstudio/install/workload-component-id-vs-build-tools
RUN C:\TEMP\vs_community.exe \
--quiet --wait --norestart --nocache \
--add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 \
--add Microsoft.VisualStudio.Workload.NativeDesktop \
--add Microsoft.VisualStudio.Component.VC.14.39.17.9.x86.64 \
--add Microsoft.VisualStudio.Component.Windows11SDK.22621 \
|| IF "%ERRORLEVEL%"=="3010" EXIT 0

SHELL ["powershell.exe", "-ExecutionPolicy", "Bypass", "-Command", \
"$ErrorActionPreference='Stop'; $ProgressPreference='SilentlyContinue'; $VerbosePreference = 'Continue';"]

# Install Clang.
RUN (New-Object Net.WebClient).DownloadFile( \
'https://github.com/llvm/llvm-project/releases/download/llvmorg-18.1.4/LLVM-18.1.4-win64.exe', \
'LLVM.exe'); \
Start-Process -FilePath \"C:\Program Files\7-Zip\7z.exe\" -ArgumentList 'x LLVM.exe -oC:\tools\LLVM' -Wait; \
$env:PATH = [Environment]::GetEnvironmentVariable('PATH', 'Machine') + ';C:\tools\LLVM\bin'; \
[Environment]::SetEnvironmentVariable('PATH', $env:PATH, 'Machine');

# Install MSYS2.
RUN (New-Object Net.WebClient).DownloadFile( \
'https://repo.msys2.org/distrib/x86_64/msys2-base-x86_64-20240727.tar.xz', \
'msys2.tar.xz'); \
Start-Process -FilePath \"C:\Program Files\7-Zip\7z.exe\" -ArgumentList 'x msys2.tar.xz -oC:\TEMP\msys2.tar' -Wait; \
Start-Process -FilePath \"C:\Program Files\7-Zip\7z.exe\" -ArgumentList 'x C:\TEMP\msys2.tar -oC:\tools' -Wait; \
$env:PATH = [Environment]::GetEnvironmentVariable('PATH', 'Machine') + ';C:\tools\msys64;C:\tools\msys64\usr\bin\'; \
[Environment]::SetEnvironmentVariable('PATH', $env:PATH, 'Machine');

# Disable signature checking on pacman because we cannot initialize the keyring.
RUN Add-Content -Path C:\tools\msys64\etc\pacman.d\mirrorlist.mingw32 -Value 'SigLevel = Never'
RUN Add-Content -Path C:\tools\msys64\etc\pacman.d\mirrorlist.mingw64 -Value 'SigLevel = Never'
RUN Add-Content -Path C:\tools\msys64\etc\pacman.d\mirrorlist.msys -Value 'SigLevel = Never'

# Install pacman packages.
RUN C:\tools\msys64\usr\bin\bash.exe -lc \
'pacman --noconfirm -Syy curl git patch vim unzip zip'

# Install multiple Pythons, but only add one of them to PATH.
RUN function Install-Python { \
param( \
[string]$version, \
[int]$prependPath \
) \
$url = ('https://www.python.org/ftp/python/{0}/python-{0}-amd64.exe' -f $version); \
Write-Host ('Downloading {0}...' -f $url); \
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; \
(New-Object Net.WebClient).DownloadFile($url, 'C:\tmp\pyinstall.exe'); \
\
# Without the patch version \
$truncatedVersion = $($version -replace '\.\d+$', ''); \
$installDir = 'C:\Python' + $truncatedVersion; \
Write-Host ('Installing into {0} (PrependPath: {1})...' -f $installDir, $($prependPath -eq 1)); \
$argumentList = ('/quiet InstallAllUsers=1 PrependPath={0} TargetDir={1}' -f $prependPath, $installDir); \
Start-Process -FilePath 'C:\tmp\pyinstall.exe' -ArgumentList $argumentList -Wait; \
\
Write-Host 'Verifying install...'; \
Write-Host \" python --version $version\"; & $installDir\python.exe --version; \
\
Write-Host 'Verifying pip install...'; \
& $installDir\python.exe -m pip --version; \
\
Write-Host 'Updating pip...'; \
& $installDir\python.exe -m pip install --upgrade pip; \
\
Write-Host 'Installing/updating packages...'; \
& $installDir\python.exe -m pip install --upgrade setuptools packaging; \
\
Write-Host 'Removing installation binary...'; \
Remove-Item C:\tmp\pyinstall.exe -Force; \
}; \
Write-Host 'Installing multiple Python versions...'; \
$versions = @( \
@{ version = '3.9.13'; prependPath = 0 }, \
@{ version = '3.10.11'; prependPath = 0 }, \
@{ version = '3.11.9'; prependPath = 0 }, \
@{ version = '3.12.8'; prependPath = 0 }, \
@{ version = '3.13.1'; prependPath = 1 } \
); \
foreach ($v in $versions) { \
Install-Python -version $v.version -prependPath $v.prependPath; \
}; \
Write-Host 'Python installations complete.';

# Add a python3 symlink for the Python in PATH.
# It's not feasible to add one for each version, as on Windows,
# Python uses PATH and the binary's (symlink's) location, to launch itself.
RUN C:\tools\msys64\usr\bin\bash.exe -lc 'ln -s /c/Python3.13/python.exe /usr/bin/python3';

# Install JDK 21.
RUN \
Add-Type -AssemblyName \"System.IO.Compression.FileSystem\"; \
$zulu_pkg = \"zulu21.34.19-ca-jdk21.0.3-win_x64.zip\"; \
$zulu_url = \"https://cdn.azul.com/zulu/bin/${zulu_pkg}\"; \
$zulu_zip = \"c:\\temp\\${zulu_pkg}\"; \
$zulu_extracted_path = \"c:\\temp\\\" + [IO.Path]::GetFileNameWithoutExtension($zulu_zip); \
$zulu_root = \"c:\\openjdk\"; \
(New-Object Net.WebClient).DownloadFile($zulu_url, $zulu_zip); \
[System.IO.Compression.ZipFile]::ExtractToDirectory($zulu_zip, \"c:\\temp\"); \
Move-Item $zulu_extracted_path -Destination $zulu_root; \
Remove-Item $zulu_zip; \
$env:PATH = [Environment]::GetEnvironmentVariable(\"PATH\", \"Machine\") + \";${zulu_root}\\bin\"; \
[Environment]::SetEnvironmentVariable(\"PATH\", $env:PATH, \"Machine\"); \
$env:JAVA_HOME = $zulu_root; \
[Environment]::SetEnvironmentVariable(\"JAVA_HOME\", $env:JAVA_HOME, \"Machine\")

# Point to the LLVM installation.
# The Bazel Windows guide claims it can find LLVM automatically,
# but it likely only works if it's installed somewhere inside C:\Program Files.
ENV BAZEL_LLVM "C:\tools\LLVM"

# These variables may be useful, but so far haven't been. Keeping for posterity.
# ENV CLANG_COMPILER_PATH "C:\tools\llvm\bin\clang.exe"
# ENV CC "C:\tools\llvm\bin\clang.exe"
# ENV BAZEL_COMPILER "C:\tools\llvm\bin\clang.exe"

ENV BAZEL_SH "C:\tools\msys64\usr\bin\bash.exe"
ENV BAZEL_VS "C:\Program Files\Microsoft Visual Studio\2022\BuildTools"
ENV BAZEL_VC "C:\Program Files\Microsoft Visual Studio\2022\Community\VC"

# Environment variables to prevent auto-conversion of Linux-like paths to Windows paths.
# This is necessary as some paths end up invalid/mangled.
ENV MSYS_NO_PATHCONV 1
ENV MSYS2_ARG_CONV_EXCL *

# This should only be necessary if there are multiple, differently-versioned
# MSVC compilers installed, and a particular one should be used.
# To find exact versions available:
# - Navigate to the relevant folder, e.g.
# C:\Program Files\Microsoft Visual Studio\2022
# - Search for the `cl.exe` file: `gci -r -fi cl.exe`
# - The version will be part of the found path, e.g.
# 2022\Community\VC\Tools\MSVC\14.39.33519\bin\Hostx64\x64
# ENV BAZEL_VC_FULL_VERSION 14.39.33519

# Install Bazelisk.
RUN md C:\tools\bazel
RUN (New-Object Net.WebClient).DownloadFile( \
'https://github.com/bazelbuild/bazelisk/releases/download/v1.22.1/bazelisk-windows-amd64.exe', \
'C:\tools\bazel\bazel.exe'); \
$env:PATH = [Environment]::GetEnvironmentVariable('PATH', 'Machine') + ';C:\tools\bazel'; \
[Environment]::SetEnvironmentVariable('PATH', $env:PATH, 'Machine');

# Install gcloud, and add it to PATH
ENV CLOUDSDK_CORE_DISABLE_PROMPTS 1
RUN (New-Object Net.WebClient).DownloadFile('https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.zip', 'C:\Temp\google-cloud-sdk.zip'); \
Expand-Archive -Path 'C:\Temp\google-cloud-sdk.zip' -DestinationPath $env:ProgramFiles -Verbose:$false
RUN & \"$env:ProgramFiles\\google-cloud-sdk\\install.bat\" --path-update false
RUN $env:Path += \";$env:ProgramFiles\\google-cloud-sdk\\bin\"; \
[Environment]::SetEnvironmentVariable('Path', $env:Path, [EnvironmentVariableTarget]::Machine);
# Re-enable prompts for interactive use.
ENV CLOUDSDK_CORE_DISABLE_PROMPTS=""

# MSYS attempts to use non-cmd versions, which aren't meant for Windows
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias gcloud=gcloud.cmd'
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias gsutil=gsutil.cmd'
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias bq=bq.cmd'

# Symlink a directory, to have it pretend be the T:\ drive.
# This drive letter is used by internal CI,
# and part of it is mounted to the container during the container's creation.
#
# While the mount argument (`-v host_path:container_path`) still requires
# `container_path` to be a legitimate C:\ path, in this case, 'C:\drive_t',
# this symlink does allow for the convenience of passing unedited paths
# to `docker exec` commands, e.g., 'T:\path', instead of 'C:\path',
# without having to replace the drive letter with C:\ every time.
# Such a workaround is not required on Linux, since it
# can create arbitrary paths within the container, e.g., '/t'.
# Note: This does not affect/work for `docker cp` commands.
RUN New-Item -ItemType directory -Path C:\drive_t; \
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices' -Name 'T:' -Value '\??\C:\drive_t' -PropertyType String;

CMD ["bash", "-i"]
2 changes: 1 addition & 1 deletion ci/official/any.sh
@@ -36,7 +36,7 @@
# export TF_ANY_EXTRA_ENV=ci/official/envs/local_rbe
# ./any.sh
# ...
set -euxo pipefail
set -exo pipefail
cd "$(dirname "$0")/../../" # tensorflow/
# Any request that includes "nightly_upload" should just use the
# local multi-cache (public read-only cache + disk cache) instead.
2 changes: 1 addition & 1 deletion ci/official/bisect.sh
@@ -32,7 +32,7 @@
# export TF_BISECT_BAD=a_failing_commit_sha
# export TF_ANY_TARGETS="quoted list of targets, like on the command line"
# export TF_ANY_MODE=test
set -euxo pipefail
set -exo pipefail
cd "$(dirname "$0")/../../" # tensorflow/
export TFCI="$(echo $TFCI | sed 's/,nightly_upload/,public_cache,disk_cache/')"
git bisect start "$TF_BISECT_BAD" "$TF_BISECT_GOOD"
Original file line number Diff line number Diff line change
@@ -17,7 +17,7 @@

# Check and rename wheels with auditwheel. Inserts the platform tags like
# "manylinux_xyz" into the wheel filename.
set -euxo pipefail
set -exo pipefail

for wheel in /tf/pkg/*.whl; do
echo "Checking and renaming $wheel..."
Original file line number Diff line number Diff line change
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -euxo pipefail
set -exo pipefail

# Run this from inside the tensorflow github directory.
# Usage: setup_venv_test.sh venv_and_symlink_name "glob pattern for one wheel file"
18 changes: 12 additions & 6 deletions ci/official/containers/ml_build/Dockerfile
@@ -1,5 +1,8 @@
################################################################################
FROM ubuntu:22.04@sha256:58b87898e82351c6cf9cf5b9f3c20257bb9e2dcf33af051e12ce532d7f94e3fe AS devel
ARG BASE_IMAGE=ubuntu:22.04@sha256:58b87898e82351c6cf9cf5b9f3c20257bb9e2dcf33af051e12ce532d7f94e3fe
FROM $BASE_IMAGE AS devel
# See https://docs.docker.com/reference/dockerfile/#understand-how-arg-and-from-interact
# on why we cannot reference BASE_IMAGE again unless we declare it again.
################################################################################

# Install devtoolset build dependencies
@@ -20,15 +23,15 @@ RUN /build_devtoolset.sh devtoolset-9 /dt9
# Setup Python
COPY setup.python.sh /setup.python.sh
COPY builder.requirements.txt /builder.requirements.txt
RUN /setup.python.sh python3.9 builder.requirements.txt
RUN /setup.python.sh python3.10 builder.requirements.txt
RUN /setup.python.sh python3.11 builder.requirements.txt
RUN /setup.python.sh python3.13 builder.requirements.txt
RUN /setup.python.sh python3.9 /builder.requirements.txt
RUN /setup.python.sh python3.10 /builder.requirements.txt
RUN /setup.python.sh python3.11 /builder.requirements.txt
RUN /setup.python.sh python3.13 /builder.requirements.txt

# Since we are using python3.12 as the default python version, we need to
# install python3.12 last for now.
# TODO(b/376338367): switch to pyenv.
RUN /setup.python.sh python3.12 builder.requirements.txt
RUN /setup.python.sh python3.12 /builder.requirements.txt

# Setup links for TensorFlow to compile.
# Referenced in devel.usertools/*.bazelrc.
@@ -41,6 +44,9 @@ RUN ln -sf /usr/lib/python3.12 /usr/lib/tf_python
# Make sure clang is on the path
RUN ln -s /usr/lib/llvm-18/bin/clang /usr/bin/clang

# Link the compat driver to the location if available.
RUN if [ -e "/usr/local/cuda/compat/libcuda.so.1" ]; then ln -s /usr/local/cuda/compat/libcuda.so.1 /usr/lib/x86_64-linux-gnu/libcuda.so.1; fi

# Install various tools.
# - bats: bash unit testing framework
# - bazelisk: always use the correct bazel version
2 changes: 1 addition & 1 deletion ci/official/containers/ml_build/setup.python.sh
@@ -24,7 +24,7 @@ VERSION=$1
REQUIREMENTS=$2

# Install Python packages for this container's version
if [[ ${VERSION} == "python3.13" ]]; then
if [[ ${VERSION} == "python3.13" || ${VERSION} == "python3.12" ]]; then
cat >pythons.txt <<EOF
$VERSION
$VERSION-dev
1 change: 1 addition & 0 deletions ci/official/envs/ci_default
@@ -42,6 +42,7 @@ TFCI_DOCKER_PULL_ENABLE=
TFCI_DOCKER_REBUILD_ARGS=
TFCI_DOCKER_REBUILD_ENABLE=
TFCI_DOCKER_REBUILD_UPLOAD_ENABLE=
TFCI_FIND_BIN=find
TFCI_GIT_DIR=
TFCI_INDEX_HTML_ENABLE=
TFCI_INSTALLER_WHL_ENABLE=
4 changes: 2 additions & 2 deletions ci/official/envs/linux_x86
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --config release_cpu_linux"
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --repo_env=USE_PYWRAP_RULES=True --config release_cpu_linux"
TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cpu
TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow_cpu"
TFCI_DOCKER_ENABLE=1
@@ -25,5 +25,5 @@ TFCI_OUTPUT_DIR=build_output
TFCI_WHL_AUDIT_ENABLE=1
TFCI_WHL_AUDIT_PLAT=manylinux2014_x86_64
TFCI_WHL_BAZEL_TEST_ENABLE=1
TFCI_WHL_SIZE_LIMIT=240M
TFCI_WHL_SIZE_LIMIT=260M
TFCI_WHL_SIZE_LIMIT_ENABLE=1
3 changes: 2 additions & 1 deletion ci/official/envs/linux_x86_cuda
@@ -18,4 +18,5 @@ TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cuda
TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow"
TFCI_DOCKER_ARGS="--gpus all"
TFCI_LIB_SUFFIX="-gpu-linux-x86_64"
TFCI_WHL_SIZE_LIMIT=610M
# TODO: Set back to 610M once the wheel size is fixed.
TFCI_WHL_SIZE_LIMIT=620M