This post first walks through the source code of HBase filters, explaining the role of each method in `Filter`, the abstract base class of HBase filters. It then presents a simple custom-filter example and, building on it, analyzes the execution flow of each `Filter` method; with this example understood, readers can write any customized filter they need. The source code discussed in this post is based on HBase 1.4.x.


The Byzantine Generals Problem, first published by Leslie Lamport et al. in 1982, provides a scenario-based description of the **distributed consensus problem**. This post first describes the Byzantine Generals Problem with illustrations, and then, building on that understanding, classifies the existing distributed consensus algorithms.

In recent years, thanks to the rapid growth of computing power, deep learning has blossomed. That growth in computing power is largely due to GPUs, and the popular deep learning frameworks such as TensorFlow, PyTorch, and MXNet all support GPU acceleration. To explore the implementation principles behind these frameworks, this blog post builds a simple deep learning framework of its own: Tinyflow. We will build a general automatic differentiation framework to which you can add any custom operator. To keep things simple, Tinyflow implements only the operators necessary for multilayer perceptron (MLP) models (such as `MatMulOp`, `ReluOp`, and `SoftmaxCrossEntropyOp`), but it supports adding any other operator (such as `ConvOp`). At the bottom layer, we use GPUs to accelerate matrix operations. Although Tinyflow is very simple compared to mature deep learning frameworks, it does have the two core elements every deep learning framework needs: automatic differentiation and GPU-accelerated computation.
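To make the operator abstraction concrete, here is a minimal NumPy sketch of what such operators could look like: each operator defines a forward computation and the gradients it passes back to its inputs. The class and method names (`MatMulOp`, `ReluOp`, `compute`, `gradient`) are illustrative assumptions, not Tinyflow's actual API.

```python
import numpy as np

class MatMulOp:
    def compute(self, a, b):
        # Forward pass: plain matrix multiplication Y = A @ B.
        return a @ b

    def gradient(self, a, b, out_grad):
        # Backward pass: dL/dA = dL/dY @ B^T, dL/dB = A^T @ dL/dY.
        return out_grad @ b.T, a.T @ out_grad

class ReluOp:
    def compute(self, x):
        # Forward pass: elementwise max(x, 0).
        return np.maximum(x, 0)

    def gradient(self, x, out_grad):
        # Backward pass: gradient flows only where the input was positive.
        return out_grad * (x > 0)
```

A GPU-backed version would keep the same interface and only swap the array backend used inside `compute` and `gradient`.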

**Automatic differentiation** (AD), also called algorithmic differentiation or simply "autodiff", is one of the basic algorithms hidden behind **deep learning frameworks** such as TensorFlow, PyTorch, and MXNet. It is the AD technique that lets us focus on the design of the model structure without paying much attention to gradient calculations during model training. This blog post focuses on the **principle** and **implementation** of AD. Finally, we will implement an AD framework based on **computational graphs** and use it for logistic regression. You can find all the code here.
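The core of reverse-mode AD on a computational graph can be sketched in a few lines: record each node's parents with the local gradients, then traverse the graph in reverse topological order, accumulating gradients. This is a minimal illustrative sketch with scalar values only; the names (`Var`, `backward`) are assumptions, not the post's actual code.

```python
class Var:
    """A scalar node in the computational graph."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_node, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def _topo_order(out):
    # Depth-first topological sort so each node is processed only after
    # all nodes that depend on it.
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(out)
    return order

def backward(out):
    # Seed the output gradient and propagate it back through the graph.
    out.grad = 1.0
    for node in reversed(_topo_order(out)):
        for parent, local in node.parents:
            parent.grad += local * node.grad
```

For example, with `y = x*x + x` at `x = 3`, `backward(y)` accumulates `dy/dx = 2x + 1 = 7` into `x.grad`.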

Softmax regression (SR), also known as multinomial logistic regression, is a generalization of logistic regression to the case where we want to handle multiple classes. As in the blog post about LR, this post details the **modeling approach**, **loss function**, and **forward and backward propagation** of SR. Finally, I implement SR in Python with NumPy and demonstrate it on the iris and MNIST datasets. You can find all the code here.
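The forward and backward passes of softmax regression can be sketched as below. This is a hedged NumPy sketch under the usual formulation (logits `Z = XW + b`, cross-entropy loss, and the well-known simplification that the gradient with respect to the logits is `probs - y`); variable names are illustrative, not necessarily those used in the post.

```python
import numpy as np

def softmax(z):
    # Subtract the row-wise max for numerical stability; the result
    # is unchanged because softmax is shift-invariant.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X, W, b):
    # Class probabilities for each row of X.
    return softmax(X @ W + b)

def cross_entropy(probs, y_onehot):
    # Average negative log-likelihood of the true classes.
    return -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=1))

def backward(X, probs, y_onehot):
    # Gradient of the loss w.r.t. the logits simplifies to (probs - y).
    n = X.shape[0]
    dZ = (probs - y_onehot) / n
    return X.T @ dZ, dZ.sum(axis=0)  # gradients for W and b
```

A training loop then just repeats `forward`, `backward`, and a gradient-descent update of `W` and `b`.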
