Get started with machine learning using Python | Opensource.com


From self-driving cars to stock market predictions to online learning, machine learning is used in almost every field that relies on prediction to improve itself. Because of this practical usefulness, it is one of the most in-demand skills in the job market right now. And getting started with Python and machine learning is easy, as there are plenty of online resources and lots of Python machine learning libraries available.

Source: Get started with machine learning using Python | Opensource.com
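
I’ve been meaning to try one of those libraries, so here’s about the smallest taste I can manage: a k-nearest-neighbors classifier trained on scikit-learn’s built-in iris dataset. This is my own sketch, not from the article, and scikit-learn is just one of many possible starting points.

    # Train and score a tiny classifier on the bundled iris dataset.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))  # accuracy on the held-out split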

Did you know that MySQL 5.7 has a native JSON data type?

MySQL introduced a native JSON data type in MySQL 5.7. So, like an integer, a char, or a real, there is now a way to store an entire JSON document in a column in a table of a database, and that document can be roughly a gigabyte in size! The server makes sure it is a valid JSON document and then saves it in a binary format that’s optimized for searching. This new data type has probably been responsible for more upgrades of MySQL than any other feature.
The data type also comes with over 20 functions. These functions will extract key-value pairs from the document, update data, provide metadata about the data, output non-JSON columns in JSON format, and more. And it’s much easier on the psyche than REGEX.

Source: What you need to know about JSON in MySQL | Opensource.com
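
Those functions look easy enough to try from Python. Here’s a quick sketch via mysql-connector; the events table, its contents, and the connection details are all made up for illustration:

    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="demo", password="demo", database="demo")
    cur = conn.cursor()

    # A JSON column is declared like any other column type.
    cur.execute("CREATE TABLE IF NOT EXISTS events (id INT PRIMARY KEY, doc JSON)")
    cur.execute(
        """INSERT INTO events VALUES (1, '{"type": "click", "meta": {"x": 10}}')""")

    # JSON_EXTRACT pulls values out by path; the -> operator is shorthand for it.
    cur.execute("SELECT JSON_EXTRACT(doc, '$.meta.x') FROM events WHERE id = 1")
    print(cur.fetchone())

    # JSON_SET updates data inside the document; JSON_TYPE reports metadata.
    cur.execute("UPDATE events SET doc = JSON_SET(doc, '$.meta.x', 42) WHERE id = 1")
    cur.execute("SELECT JSON_TYPE(doc->'$.meta') FROM events WHERE id = 1")
    print(cur.fetchone())

    conn.commit()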

Well, I guess I really do need to update to MySQL 5.7 and keep track of what’s going on in version 8 development.

From the MySQL 5.7 docs on the JSON data type:

As of MySQL 5.7.8, MySQL supports a native JSON data type that enables efficient access to data in JSON (JavaScript Object Notation) documents. The JSON data type provides these advantages over storing JSON-format strings in a string column:

  • Automatic validation of JSON documents stored in JSON columns. Invalid documents produce an error.
  • Optimized storage format. JSON documents stored in JSON columns are converted to an internal format that permits quick read access to document elements. When the server later must read a JSON value stored in this binary format, the value need not be parsed from a text representation. The binary format is structured to enable the server to look up subobjects or nested values directly by key or array index without reading all values before or after them in the document.

I’ve got a number of projects that would benefit from this right now.
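
And the automatic validation is easy to see for yourself. Reusing the same made-up events table from my sketch above, an insert that isn’t valid JSON gets rejected by the server before it is ever stored:

    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="demo", password="demo", database="demo")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS events (id INT PRIMARY KEY, doc JSON)")

    try:
        # Not valid JSON, so the JSON column refuses it with an error.
        cur.execute("INSERT INTO events VALUES (2, 'not even close to json')")
    except mysql.connector.Error as err:
        print(err)  # e.g. "Invalid JSON text ..."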

New – USASpending.gov on an Amazon RDS Snapshot | AWS Blog

[S]tarting today, the entire public USAspending.gov database is available for anyone to copy via Amazon Relational Database Service (RDS). USAspending.gov data covers all spending by the federal government, including contracts, grants, loans, employee salaries, and more. The data is available via a PostgreSQL snapshot, which provides bulk access to the entire USAspending.gov database, and is updated nightly. At this time, the database includes all USAspending.gov data for the second quarter of fiscal year 2017, and data going back to the year 2000 will be added over the summer. You can learn more about the database and how to access it on its AWS Public Dataset landing page.

Source: New – USASpending.gov on an Amazon RDS Snapshot | AWS Blog
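
Copying the snapshot into your own instance looks straightforward with boto3. A sketch only: the snapshot ARN below is a placeholder, and the real identifier is listed on the dataset’s AWS Public Dataset landing page.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Restore a new PostgreSQL instance from the shared public snapshot,
    # which gives you your own writable copy of the database.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="my-usaspending-copy",
        DBSnapshotIdentifier="arn:aws:rds:us-east-1:000000000000:snapshot:usaspending-db",
        DBInstanceClass="db.m4.large",
        Engine="postgres",
    )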

Kubernetes Services By Example – OpenShift Blog

In a nutshell, Kubernetes services are an abstraction for pods, providing a stable, virtual IP (VIP) address. As pods may come and go, for example in the process of a rolling upgrade, services allow clients to reliably connect to the containers running in the pods, using the VIP. The virtual in VIP means it’s not an actual IP address connected to a network interface; its purpose is purely to forward traffic to one or more pods. Keeping the mapping between the VIP and the pods up to date is the job of kube-proxy.

Source: Kubernetes Services By Example – OpenShift Blog
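
The official kubernetes Python client makes the abstraction concrete. A sketch with placeholder names: this declares a Service whose stable VIP forwards port 80 to port 8080 on whatever pods currently carry the label app=my-app.

    from kubernetes import client, config

    config.load_kube_config()  # use local kubeconfig credentials

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="my-app"),
        spec=client.V1ServiceSpec(
            selector={"app": "my-app"},  # the pods the VIP maps to
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    client.CoreV1Api().create_namespaced_service(namespace="default", body=service)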

How to speed up your MySQL queries 300 times | Opensource.com

MySQL has a built-in slow query log. To use it, open the my.cnf file and set the slow_query_log variable to “On.” Set long_query_time to the number of seconds that a query should take to be considered slow, say 0.2. Set slow_query_log_file to the path where you want to save the file. Then run your code and any query above the specified threshold will be added to that file.

Source: How to speed up your MySQL queries 300 times | Opensource.com
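
Worth noting: all three of those variables are dynamic, so you can also flip them at runtime with SET GLOBAL instead of editing my.cnf. A sketch, assuming a suitably privileged account; the log file path is a placeholder.

    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="root", password="secret")
    cur = conn.cursor()

    # Takes effect without a server restart; new connections pick it up.
    cur.execute("SET GLOBAL slow_query_log = 'ON'")
    cur.execute("SET GLOBAL long_query_time = 0.2")
    cur.execute("SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log'")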