Powerful MySQL database visual editor for Windows

MySQL Workbench

  -  41.7 MB  -  Open Source
  • Latest Version

    MySQL Workbench 8.0.38 LATEST

  • Review by

    Daniel Leblanc

  • Operating System

    Windows 10 (64-bit) / Windows 11

  • Author / Product

    Oracle

MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. It provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, and much more. The app is available on Windows, Linux, and macOS.

Features and Highlights

MySQL Workbench enables a DBA, developer, or data architect to visually design, model, generate, and manage databases. It includes everything a data modeler needs for creating complex ER models, forward and reverse engineering, and also delivers key features for performing difficult change management and documentation tasks that normally require much time and effort.

MySQL Workbench delivers visual tools for creating, executing, and optimizing SQL queries. The SQL Editor provides color syntax highlighting, auto-complete, reusable SQL snippets, and SQL execution history. The Database Connections Panel lets developers easily manage standard database connections, including MySQL Fabric. The Object Browser provides instant access to database schemas and objects.

It provides a visual console to easily administer MySQL environments and gain better visibility into databases. Developers and DBAs can use the visual tools for configuring servers, administering users, performing backup and recovery, inspecting audit data, and viewing database health.
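
The same administrative tasks can, of course, be performed in plain SQL; below is a minimal sketch of the user-administration statements the visual console wraps (the account and schema names are hypothetical, for illustration only):

```sql
-- Hypothetical account and schema names, shown for illustration only.
CREATE USER 'report_ro'@'%' IDENTIFIED BY 'change-me';

-- Grant read-only access to a single schema.
GRANT SELECT ON reporting.* TO 'report_ro'@'%';

-- Verify the privileges that were granted.
SHOW GRANTS FOR 'report_ro'@'%';
```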

Visual Performance Dashboard
It provides a suite of tools to improve the performance of MySQL applications. DBAs can quickly view key performance indicators using the Performance Dashboard. Performance Reports provide easy identification of and access to IO hotspots, high-cost SQL statements, and more. Plus, with a single click, developers can see where to optimize their query with the improved and easy-to-use Visual Explain Plan.
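
Visual Explain renders the server's query plan graphically; as a sketch, the plan for a query like the following (table and column names are hypothetical) can also be inspected in text form:

```sql
-- EXPLAIN FORMAT=TREE (MySQL 8.0.16+) prints the plan Visual Explain draws.
EXPLAIN FORMAT=TREE
SELECT c.name, COUNT(*) AS order_count
FROM customers AS c
JOIN orders AS o ON o.customer_id = c.id
GROUP BY c.name
ORDER BY order_count DESC
LIMIT 10;
```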

Database Migration
MySQL Workbench now provides a complete, easy-to-use solution for migrating Microsoft SQL Server, Microsoft Access, Sybase ASE, PostgreSQL, and other RDBMS tables, objects, and data to MySQL. Developers and DBAs can quickly and easily convert existing applications to run on MySQL, both on Windows and other platforms. Migration also supports moving from earlier versions of MySQL to the latest releases.

Note: Requires .NET Framework.

Also Available: Download MySQL Workbench for Mac

  • MySQL Workbench 8.0.38 Screenshots


What's new in this version:

C API Notes:
- C API applications stalled while receiving results for server-side prepared statements

Compilation Notes:
- Upgraded the bundled googletest and googlemock sources to version 1.14.0
- Added a missing dependency on GenError
- It is now possible to build MySQL using the bundled tcmalloc library provided with the source by specifying -DWITH_TCMALLOC=BUNDLED; this is supported on Linux only
- Linux aarch64 platform binaries are now built using patchelf --page-size=65536 for compatibility with systems using either 4k or 64k for the page size

Data Dictionary Notes:
- Attempting to upgrade a MyISAM table containing a mix of regular columns and generated columns from MySQL 5.7 to 8.0 or later led to table corruption

- InnoDB: MySQL unexpectedly halted on an UPDATE after an ALTER TABLE operation
- References: This issue is a regression of: Bug #35183686.
- InnoDB: The log index size calculation now accounts for column order changes
- References: This issue is a regression of: Bug #35183686.
- InnoDB: File system operations performed by InnoDB now consistently fsync the parent directory when performing directory altering tasks
- InnoDB: In debug builds, setting the innodb_interpreter_output debug variable would cause the server to unexpectedly halt. This is now a read-only variable
- InnoDB: For tables created with an index on a column that was too wide for the redundant row format (allowed before MySQL 5.7.35), an in-place upgrade silently imported the table but it was not accessible, which interfered with making backups. Now all operations that involve using the invalid index are rejected with ER_INDEX_CORRUPT until the index is dropped. An ER_IB_INDEX_PART_TOO_LONG error is also reported in the error log
- References: See also: Bug #34826861.
- InnoDB: An InnoDB assertion error referencing an invalid column index was triggered when the column index was valid
- InnoDB: With an empty XA transaction, shutting the server down after an XA START would cause the server to halt unexpectedly
- InnoDB: Shutting down the replication applier or binlog applier while processing an empty XA transaction caused the system to unexpectedly halt
- InnoDB: Removed unnecessary heap usage in the Validate_files::check() function. Our thanks to Huaxiong Song for the contribution
- InnoDB: If a partitioned table was read with innodb_parallel_read_threads=1, read performance on any table greatly decreased after 256 reads. InnoDB behaved as though it had reached the maximum capacity of parallel read threads despite not using any. Our thanks to Ke Yu for the contribution
- InnoDB: The result from a spatial index containing a column with a spatial reference identifier (SRID) attribute was empty. In addition, using FORCE INDEX to force a covering index scan on a spatial index led to an assertion
- InnoDB: Fixed performance issues related to querying the data_lock and data_lock_waits tables when thousands of read-only transactions were present
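
Regarding the upgrade note above about indexes too wide for the REDUNDANT row format, the remedy is to drop the invalid index; a sketch with hypothetical table and index names:

```sql
-- List the table's indexes to identify the one flagged in the error log.
SHOW INDEX FROM legacy_table;

-- Drop the invalid index; until then, operations that use it are
-- rejected with ER_INDEX_CORRUPT.
ALTER TABLE legacy_table DROP INDEX idx_wide_col;
```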

- Replication: If a source contained a stored, generated column populated by a JSON function and binlog_row_image was set to MINIMAL, any subsequent update or deletion on the underlying column failed with the following error:
- Invalid JSON text in argument 1 to function json_extract: 'The document is empty.'
- The replica attempted to re-evaluate the generated column and failed with that error because the underlying column was unavailable. As of this release, stored, generated columns are not re-evaluated when the underlying columns are unavailable
- Group Replication: Removed a memory leak from /xcom/gcs_xcom_networking.cc
- JSON: Added missing checks for error handling to NULLIF(), COALESCE(), and the shift (>>) operator
- References: See also: Bug #31358416.
- MySQL NDB ClusterJ: Running the ClusterJ test suite resulted in an error message saying a number of threads did not exist. This was due to incorrect handling of threads and connections, which this patch corrects
- Averages of certain numbers were not always computed correctly

The following files in the strings directory contained incorrect license information:
- mb_wc.h
- ctype-uca.cc
- ctype-ucs2.cc
- ctype-utf8.cc
- dtoa.cc
- strxmov.cc
- strxnmov.cc
- (Bug #36506181)
- In certain unusual cases, the UpdateXML() function did not process all of its arguments correctly
- Explaining a query which used FORCE INDEX on a spatial index containing a column with SRID attributes led to an unplanned exit
- When incrementing the reference count for an expression, its underlying expressions are not examined. When removing an expression, however, the underlying expressions were examined after decrementing the reference count, which led to their unintentional deletion. This issue manifested in Item_ref::real_item() as well as in an assert in sql/item.h. This is fixed by not examining the underlying expressions unless the current expression holds the only remaining reference
- Under certain conditions, EXPLAIN FORMAT=JSON FOR CONNECTION sometimes led to an unplanned exit
- Under certain conditions, a race condition could result in the amount of RAM used by TABLE_HANDLES increasing to a maximum of 9GB
- Some CREATE USER statements were not handled correctly
- For a SELECT with ORDER BY and LIMIT, the optimizer first chose a full table scan with a very expensive cost, then performed another check and used the perform_order_index type of path, but this was not reflected by the cost in the optimizer plan
- All internal ACL bitmask variables are now explicitly 32 bits (uint32_t)
- It was not possible to add a functional index on FIND_IN_SET()
- Running two concurrent OPTIMIZE TABLE statements on the same table with fulltext indexes and innodb_optimize_fulltext_only enabled sometimes caused the server to exit
- It was possible for a deterministic stored function to return an incorrect result when the function used JOIN ON inside the return statement. If the query needed to be reprepared due to a table metadata change caused by, for example, FLUSH TABLES between two executions, the ON clause was sometimes lost
- The PROCESSLIST_INFO column of THREADS was not updated when executing a prepared statement
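
The FIND_IN_SET() item above concerns functional indexes; a sketch (table and index names are hypothetical) of the kind of statement that previously failed:

```sql
-- A functional index over FIND_IN_SET(), now accepted by this release.
-- Note the double parentheses required for functional key parts.
CREATE TABLE tags_demo (id INT PRIMARY KEY, tags VARCHAR(255));
CREATE INDEX idx_has_mysql ON tags_demo ((FIND_IN_SET('mysql', tags)));
```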