Python Uses Deep Neural Networks to Identify Siamese and British Shorthair Cats
Let’s look at a couple of pictures: can you tell which cat is the Siamese and which is the British Shorthair?
First, the Siamese:

Second, the British Shorthair:

Could you tell the Siamese from the British Shorthair? Probably not reliably. That is because the material is too thin: from just these two pictures there are too few features to extract. But suppose we gave you 100 pictures of Siamese and 100 pictures of British Shorthairs for reference, and then showed you a single photo of one or the other to identify. Even if you could not recognize it outright, you would still have perhaps a 90% chance of guessing right. And if we provided 500 pictures of each, would you be even more likely to guess right?
How do we tell Siamese and British Shorthairs apart? We first summarize the characteristics of the two cats, such as facial coloring and eye color; then, when there is a picture to identify, we check whether its facial coloring and eye color match the characteristics of a Siamese.
Could a computer identify the two cats just as well, after learning how to tell Siamese and British Shorthairs apart?
So how does a computer recognize images? Let’s first look at how a computer stores them.

In the computer, an image is an ordered pile of numbers from 0 to 255. One layer of such numbers is a black-and-white picture; a color picture stores one layer for each of the three primary colors: red, green, and blue.

In other words, a picture inside the computer is a cuboid: a cuboid with a depth of 3, where each layer holds numbers between 0 and 255.
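To see this cuboid for yourself, here is a minimal Python sketch (the file name cat.jpg is a stand-in; it assumes NumPy and Pillow are installed):

import numpy as np
from PIL import Image

img = np.array(Image.open('cat.jpg'))  # hypothetical image file
print(img.shape)  # e.g. (height, width, 3): a cuboid of depth 3
print(img.dtype)  # uint8: every entry is a number from 0 to 255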
To get a computer to recognize a picture, you have to let it learn the features of the images it is supposed to recognize. Extracting features from images is the main task of image recognition.
Here comes the main character: the Convolutional Neural Network (CNN).
The simplest convolutional neural network looks like this.

It is divided into input, convolution layers, pooling (subsampling) layers, fully connected layers, and output. Each layer compresses the most important identifying information and passes it on to the next layer.
Convolution layer: extracts features. Deeper layers of a (multi-layer) convolutional network extract more specific features, while shallower layers extract more obvious ones.
Pooling layer: reduces the image resolution and shrinks the feature maps.
Fully connected layer: flattens the image features, treating the image as an array and using the pixel values as the feature values for prediction.
Convolution layer
The convolution layer extracts features from the picture, which is stored in the computer in the cuboid format described above. So how is a feature extracted? With a convolution kernel (a matrix of weights), using the following operation:

Compare the left and right matrices: the size shrinks from 6×6 to 4×4, but the distribution of the numbers stays roughly consistent. Here is the effect on a real picture:

The picture looks a little blurry, but what if we want the output to keep the same size as the input? That is done with "same" padding:

You add a ring of zeros around the 6×6 matrix, making it 8×8, so that after convolution you get a 6×6 matrix back. How many zeros to add depends on the size of the convolution kernel, the stride, and the boundary handling; try working it out yourself.
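A quick way to check this size arithmetic is SciPy's 2-D convolution, which supports both modes directly (a sketch, assuming NumPy and SciPy are installed):

import numpy as np
from scipy.signal import convolve2d

image = np.random.randint(0, 256, (6, 6))  # a 6x6 single-channel 'image'
kernel = np.ones((3, 3)) / 9.0             # a simple 3x3 averaging kernel

valid = convolve2d(image, kernel, mode='valid')  # no padding: 6x6 -> 4x4
same = convolve2d(image, kernel, mode='same')    # zero padding: 6x6 -> 6x6

print(valid.shape)  # (4, 4)
print(same.shape)   # (6, 6)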
The above demonstrates a 3×3 kernel applied to a 6×6 matrix. What does convolution look like on a real picture? See the diagram below:

Convolving a 32x32x3 image with 10 filters of size 5x5x3 produces a 28x28x10 activation map (the activation map is the output of the convolution layer).
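As a sanity check on these shapes, the output width of a convolution follows the formula (W - F + 2P) / S + 1, where W is the input width, F the filter size, P the padding, and S the stride. A tiny helper (mine, not from the original code):

def conv_output_size(w, f, p=0, s=1):
    # output width of a convolution: (W - F + 2P) / S + 1
    return (w - f + 2 * p) // s + 1

print(conv_output_size(32, 5))      # 28, matching the 32x32x3 -> 28x28x10 example
print(conv_output_size(6, 3))       # 4: a 3x3 kernel shrinks 6x6 to 4x4
print(conv_output_size(6, 3, p=1))  # 6: one ring of zero padding keeps 6x6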
Pooling layer
The pooling layer reduces the image resolution and shrinks the feature maps. How?
Pooling is done independently on each depth slice, so the depth of the image stays the same. The most common form of pooling is max pooling.
You can see that the image gets visibly smaller, as shown in the figure:

A new, smaller map is obtained by taking the maximum of each 2x2 block on the two-dimensional matrix of each slice of the activation map. The real effect is as follows:

As convolution and pooling layers stack up, the features the corresponding filters detect become more and more complex. There is also the matter of optimizing the convolution kernels, which takes many rounds of training.
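To make the 2x2 max pooling described above concrete, here is a minimal NumPy sketch (the helper name is mine):

import numpy as np

def max_pool_2x2(activation):
    # max-pool a 2D activation map with a 2x2 window and stride 2
    h, w = activation.shape
    trimmed = activation[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.arange(16).reshape(4, 4)
print(max_pool_2x2(a))  # 4x4 -> 2x2, keeping the max of each 2x2 block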
Below we use Apple's convolutional-neural-network framework, TuriCreate, to distinguish Siamese from British Shorthairs. (First, a warning: I worked late into the night in Windows 10 and reinstalled the machine more than three times. The system needs WSL; TuriCreate installs conveniently under the Windows 10 Enterprise edition, macOS, or Ubuntu.)
First, prepare the training data: 50 pictures of Siamese and 50 of British Shorthairs. The test uses 10 pictures.
Code (development tools: Anaconda, Python 2.7):

The data is placed in the image directory on the H drive; since I am running Ubuntu inside Windows 10, the H drive is mounted under /mnt.

Test files (x refers to Siamese, y refers to British Shorthair; the names let the code distinguish the cat type in the test pictures):

test_data[‘label’] = test_data[‘path’].apply(lambda path: ‘xianluo’ if ‘x’ in path else ‘yingduan’)
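Since the original code screenshots are not reproduced here, the training flow with TuriCreate might look roughly like the following sketch (directory paths are hypothetical; the label rule is the one from the line above):

import turicreate as tc

# load the training images from the mounted data directory
data = tc.image_analysis.load_images('/mnt/h/image', with_path=True)
data['label'] = data['path'].apply(lambda path: 'xianluo' if 'x' in path else 'yingduan')

# train an image classifier (transfer learning on a pre-trained network)
model = tc.image_classifier.create(data, target='label')

# evaluate on the held-out test pictures
test_data = tc.image_analysis.load_images('/mnt/h/test', with_path=True)
test_data['label'] = test_data['path'].apply(lambda path: 'xianluo' if 'x' in path else 'yingduan')
metrics = model.evaluate(test_data)
print(metrics['accuracy'])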
The first results are as follows:

The training accuracy is 0.75 and the test accuracy is 0.5. Well, it seems we studied too little; in the spirit of "Five Years of College Entrance Exams, Three Years of Mock Tests", let's increase the Siamese and British Shorthair pictures to 100 each and look at the results.

The training accuracy was 0.987 and the test accuracy was 1.0.
Let's see TuriCreate's recognition results:

The actual cats in the pictures are as follows (red is the true type of the cat, taken from the image name in the code; green is the identified type):

You can see the two agree. Impressive: with only 200 training pictures in total, it achieves this result.

Taking Tik Tok and NetEase Cloud Music as an Example to Discover the Different Requirements of the Three Stages of User Retention
In general, an employee who leaves within one month, one who leaves within half a year, and one who leaves after two or more years do so for different reasons.
Leaving within a month is usually about failing to adapt to the job, or about the work itself.
Leaving around the half-year mark is generally about the direct superior.
Leaving after more than two years usually means the employee identifies with the company but sees limited room for development.
Product retention is much like employee retention: the reasons behind short-term, mid-term, and long-term retention are very different.

01

Short-term retention can be understood as retention after the user gains a first impression of the product: after the download, the app is not deleted immediately or within the next couple of days.
A new user enters the product-discovery stage right after the download completes. At this stage the user mainly sees the product's core interface. If the product itself has no particularly eye-catching hook (a "bright spot") that grabs users at once, the risk of losing them is very high; after all, most users are only trying it out. For this kind of churn, products that recommend content intelligently (by algorithm) or by editorial selection have an advantage: news feeds, short video, e-commerce, and live streaming can all use highly popular recommended content as the hook for most users.
Take Tik Tok, which I have recently been trying to understand, as an example. It has a big advantage that gives it higher short-term retention than other short-video apps, for three reasons. First, Tik Tok's own positioning and user base make its music videos distinctive. Second, video quality is kept high by the shooting threshold, high standards, and strong video processing. Third, the selection mechanism of the home page lowers the user's cost of choosing: you open the app and short videos simply play, and judging from my days of viewing, the quality is consistently high. (I have not worked out the recommendation algorithm behind the front page, but judging by the content quality, there is probably substantial manual curation.)

These three characteristics mean that after opening the app, the user is quickly immersed in high-quality short video; in other words, the app hooks users fast. That, I feel, is better than Kuaishou: choosing among short videos is made effortless. Recall a boring afternoon when you wanted to find a good movie, and the tangled, frustrating process of picking one.
Content products without Tik Tok's distinctive features can start from another angle. According to the 2016 mobile information industry report released by Toutiao, entertainment content accounts for 68.29% of the platform, ranking first; social information is second at 67.29%; and comedy is third at 46.56%. In other words, a product that simply covers these three categories can achieve very high short-term retention among early adopters.
Of course, a content product generally adopts strong visual and interaction design; that is, users should understand the product's focus at a glance. The product's features should let users settle in easily and quickly form the first impression that the product is high quality and its functions look quite good.
This also helps explain why a good newbie guide matters so much for new products: it is a shortcut for users to learn the product quickly. Novice guidance can cover not only how to use the product but also its core highlights (including ideas such as craftsmanship and high quality).

02

For the medium term, more appropriate content and more comfortable functionality become more important. With anything, we gradually move from the honeymoon (early-adopter) period into a calm period. Once we are used to most of a product's functions, the product should make those functions more refined and more convenient: understand the current user better, so the recommendation algorithm gets more accurate; make operations easier, turning what used to take three clicks into one step, much as long-time computer users replace the mouse with keyboard shortcuts; offer more personalized visual options, such as app design styles. These are good steps from "usable" toward "pleasant to use".
NetEase Cloud Music's song-comment module is not a function anyone strictly needs, but as icing on the cake it works very well: many top comments directly set the user's mood and strengthen the appeal of the song itself.

A step further: provide more interesting content, even if it is only weakly tied to the core function. Take NetEase Cloud Music's "Friends" module: I found I had unconsciously spent a lot of time in it. There are short funny videos, wacky GIFs, good music, and some celebrity gossip. The module has no direct relationship with listening to music, but it extends from music and the stars into a friends-circle format (its recommendation mechanism is not really a friends circle; it mixes in smart and trending recommendations, since relationships on Cloud Music are mostly weak ties, slightly unlike WeChat). As a module of Cloud Music, I think it is a good reference for mid-term retention.

03

Now for the long term, the senior user's problem. Once users are used to a product's content and functions, an "itch" stage arrives. However well WeChat is made, after a year or two you may feel it too; however humorous the jokes, after seeing enough of them you can often guess the routine, and the humor turns boring. Do you still watch Papi Jiang? In the words of many netizens: the same old routines. Tik Tok now has the same problem. It starts from short music videos, which means many videos fix their content to the rhythm of the music, so the shooting and editing routines repeat. Many heavy users around me say the repetition rate of the videos is too high; they keep seeing similar forms, and the novelty is gradually fading.
At this stage, products generally play three big cards: the first is called social, the second the user growth system, and the third continuous operational stimulus (including varied topics, hot spots, and community).
Social is easy to understand. The reason we do not give up WeChat is that it holds so much of our social graph; what we care about is not WeChat but the people on it. So far no other product has managed to build a relationship chain as strong as WeChat's.
Live streaming on YY, strangers on Momo, short video on Kuaishou and Tik Tok: as the freshness of these products fades, they cannot build WeChat-grade strong ties, but their weak ties are still well worth mining.
Live streaming and video serve loneliness and boredom: we see interesting strangers on the screen, and since they are human, they naturally have personality.
The preferences within weak relationships can then be mined from liking content into liking a personality. Think of it as something like the AKB48 model: perhaps at first I liked your best work; after becoming a fan, I like you as a person, your personality and your fun, and the work is only part of that personality. Later, the anchor you follow may no longer need dazzling content; you simply like them more and more, and even take them to be genuine.
This is part of how content products deep-mine weak social ties.
The growth system, in short, is "after a few years of work, honors plus some privileges and benefits". I will not expand on it in detail here.
Continuous operational stimulus usually comes in two kinds. Information and media products tend to operate controversial topics; well-chosen topics can keep delivering surprises and keep the replies coming.
E-commerce products tend to manufacture festivals; Double Eleven and 618 belong to this category. Manufactured festivals are regular; there are also irregular stimuli, such as Didi or Mobike sending coupons and discounts for no particular reason.

04

To summarize: although I spoke of short-, medium-, and long-term retention, these are not fixed spans of time. Different users use a product to different depths, and an individual user's short, medium, and long term may not follow the same course. That is the first point.
Second, these product functions need not come strictly one after another and may well develop in parallel. Some retention features apply to all three stages; the question is in which stage they work best, cost the least, and pay off the most.
For a fresh college graduate, telling him how top Fortune 500 companies run their business is of no use; it is more real to teach him how to improve his interview success rate. The same goes for choosing which retention strategy to use when.
So those are a few small thoughts on retention; I hope they are useful.

Content reposted from the WeChat official account 油炸果子.

Ambari Installation and Custom Service Initial Implementation
Ambari installation

1 Ambari introduction

The purpose of the Apache Ambari project is to simplify Hadoop management by developing software for the configuration, monitoring, and management of Hadoop clusters. Ambari also provides an intuitive, easy-to-use web management interface built on its own RESTful interfaces.
Ambari allows system administrators to:
1. Install and manage a Hadoop cluster;
2. Monitor a Hadoop cluster;
3. Extend Ambari's management functions with custom services.

2 Basic requirements for the cluster

2.1 Operating system requirements

- Red Hat Enterprise Linux (RHEL) v5.x or 6.x (64-bit);
- CentOS v5.x, 6.x, or 7.x (64-bit);
- Oracle Linux v5.x or 6.x (64-bit).
This document uses CentOS 6.5 (64-bit).

2.2 Base software requirements

Install the following software on each host:
(1) yum and RPM (RHEL/CentOS/Oracle Linux);
(2) zypper (SLES);
(3) scp, curl, wget.

2.3 JDK requirements

Oracle JDK 1.7.0_79 64-bit (default)
OpenJDK 7 64-bit (not supported on SLES)

3 Prerequisites before installation

3.1 Software requirements for Ambari and its monitoring services

Before installing Ambari, to guarantee that the Ambari services and the various monitoring services run normally, certain software must already be installed in the versions required by the operating system, listed below. That is, if the system already has any of the following software, its version must match the listed version exactly; if it is absent, the installer will install it on its own.
Table 3-1: Software version prerequisites

3.2 Ambari and HDP version compatibility

Because software versions keep upgrading, incompatibilities between versions may cause problems.
Table 3-2: Version compatibility

4 Installation example

The system and software versions chosen in this article are shown in the table below:
Table 4-1: System and software versions

4.1 Operating system preparation before installing Ambari

4.1.1 Configure the hostnames
Ambari determines the cluster's machine information from fully qualified hostnames, so you must make sure the hostnames are set correctly.
4.1.2 Configure cluster information
Configure the hosts-file mapping on each machine with the following command:
# vi /etc/hosts
Then add the following content:
Table 4-2: IP mapping information

4.1.3 Configure passwordless SSH
First, execute the key-generation command on the master node and on every other node, so that each machine produces a public key. The usual command is:
# ssh-keygen -t rsa
Press Enter through every prompt. Then combine each node's public key into a new authorized_keys file and distribute it to every node. This completes passwordless login between the nodes.
4.1.4 Configure NTP time synchronization
First do the following on the primary node:
(1) install the NTP time server:
# yum install ntp
(2) modify the ntpd configuration file
(3) start the time synchronization server:
# service ntpd start
(4) do the same on each of the other nodes; NTP synchronization is then complete.
4.1.5 Disable SELinux
To disable SELinux permanently:
# vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
A restart is required for this to take effect:
# reboot
4.1.6 Disable the iptables firewall
Permanently (requires a restart):
# chkconfig iptables off
Temporarily (stops the firewall service without a restart):
# service iptables stop
Check the firewall status:
# chkconfig --list | grep iptables
Note: other services under Linux can be opened and closed with the same commands.
A restart is required for the permanent change to take effect:
# reboot

4.2 Create a local yum repository

First check whether the httpd server is installed on the master node:
# rpm -qa | grep httpd
If not, install it:
# yum install httpd
Start httpd:
# service httpd start
# chkconfig httpd on
Grant the same rights to all files in the web folder and its subfolders:
# chmod -R ugo+rX /var/www/html
Enable the network interface:
# vim /etc/sysconfig/network-scripts/ifcfg-eth0
and set ONBOOT=yes.
After a successful installation, Apache's working directory defaults to /var/www/html.
Configuration:
Check whether the port is occupied; the Apache HTTP service uses port 80:
[root@master ~]$ netstat -nltp | grep 80
If port 80 is taken, modify the Apache HTTP service port after installation:
[root@master ~]$ vi /etc/httpd/conf/httpd.conf
Change the Listen directive from port 80 to another port.
Put the downloaded installation files under /var/www/html, then start the service:
[root@master ~]$ service httpd start
If you can open http://master in a browser and see the Apache server page, the start was successful.

5 Preparation for a completely offline Ambari installation

The difference between offline and online installation is the yum repository location: the remote repository's installation packages are copied to a local resource share, a local yum repository is created over those package folders, and the installation then proceeds just as in the online case. However, offline installation must resolve the RPM dependency problems of the Ambari packages; first make sure that PostgreSQL 8.4.3 is installed, or that a local PostgreSQL 8.4.3 repository exists.

5.1 Prerequisites

Ambari's offline installation relies on yum. A freshly installed operating system may lack many of the necessary pieces; satisfy the conditions below in order, skipping any that are already met.
Because of the complexity of the operating system itself, the installation may prompt for other required software or for upgrades of existing software; resolve these as prompted.
5.2 Establish a local repository

HTTP service installed on a machine within the cluster, and then will provide the tar packages or put the RPM package on the machine/var/WWW/HTML directory can be the default directory (Apache) under decompression, the best in this directory to create a new directory, all ambari tar packages and HDP and HDPUTIL is placed in it and extract the tar package, if the machine does not have to manually install PostgreSQL, will provide the software packages together into the local repository.

5.3 Configure yum to skip the GPG key check

In tests, installing the Hadoop cluster offline via yum failed on the GPG key check. This can be avoided by turning off yum's GPG check system-wide:
# vi /etc/yum.conf
Set the gpgcheck attribute to 0:
gpgcheck=0

5.4 Install the Ambari service

# yum install ambari-server

5.5 Ambari setup

# ambari-server setup
This walks through whether to run ambari-server as a daemon, the JDK choice, database configuration, and so on; choose according to the requirements of your system.
When "Ambari Server setup completed successfully" appears, the Ambari Server configuration succeeded. Note: the default database option is PostgreSQL, for which the user and database are prepared in advance by default. If you choose MySQL instead, you need to create the user, grant permissions, and create the database before installing ambari-server.
Then start ambari-server and install the Hadoop ecosystem services as needed.

Custom services

1 Ambari custom extension service

As described in the first part, the main secondary-development job is integrating the components under study into Ambari for monitoring and management. This article integrates Redis.
First, since every service belongs to a stack, decide which stack the custom service belongs to. Because the HDP 2.5.0 stack is already installed, this article places the service under HDP 2.5.0. The new service is called redis-service, with the structure shown in the figure below:

The XML files under the configuration directory configure the module; the package directory holds the Python files that control the service's life cycle; metainfo.xml defines the main attributes of the service; and metrics.json and widgets.json control the service's charts in the interface.
An example metainfo.xml:



<?xml version="1.0"?>
<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>REDIS-SERVICE</name>
      <displayName>Redis</displayName>
      <comment>My Service</comment>
      <version>1.0</version>
      <components>
        <component>
          <name>MASTER</name>
          <displayName>Master</displayName>
          <category>MASTER</category>
          <cardinality>1</cardinality>
          <commandScript>
            <script>scripts/master.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>5000</timeout>
          </commandScript>
        </component>
        <component>
          <name>SLAVE</name>
          <displayName>Slave</displayName>
          <category>SLAVE</category>
          <cardinality>1+</cardinality>
          <commandScript>
            <script>scripts/slave.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>5000</timeout>
          </commandScript>
        </component>
      </components>
      <osSpecifics>
        <osSpecific>
          <osFamily>any</osFamily>
        </osSpecific>
      </osSpecifics>
    </service>
  </services>
</metainfo>
Second, we need to create the service life-cycle control scripts master.py and slave.py. Make sure their paths match the paths configured in metainfo.xml in the previous step. The two Python scripts control the life cycle of the Master and Slave components, and the functions mean exactly what their names say: install is the installation hook; start and stop are the start/stop hooks; status is called periodically to check the component's state. Templates for master.py and slave.py:
Master.py

# Script comes from Ambari's resource_management library
from resource_management import Script

class Master(Script):
    def install(self, env):
        print "Install Redis Master"
    def configure(self, env):
        print "Configure Redis Master"
    def start(self, env):
        print "Start Redis Master"
    def stop(self, env):
        print "Stop Redis Master"
    def status(self, env):
        print "Status..."

if __name__ == "__main__":
    Master().execute()

Slave.py

# Script comes from Ambari's resource_management library
from resource_management import Script

class Slave(Script):
    def install(self, env):
        print "Install Redis Slave"
    def configure(self, env):
        print "Configure Redis Slave"
    def start(self, env):
        print "Start Redis Slave"
    def stop(self, env):
        print "Stop Redis Slave"
    def status(self, env):
        print "Status..."

if __name__ == "__main__":
    Slave().execute()

Next, put the Redis RPM installation file into the HDP package directory /var/www/html/ambari/HDP/centos6/.
Then restart ambari-server, because the Ambari server reads Service and Stack configuration only at restart. On the command line: ambari-server restart.
Finally, log in to the Ambari web UI, click Actions in the lower left corner, and select Add Service, as shown below:

At this point the Redis service appears in the list of installable services. Then check whether the service installs successfully.

2 Displaying the custom service's metrics

Section 1 mentioned the custom metrics.json and widget.json. A Widget is a chart control on the Ambari web UI that displays Metrics: it takes the Metrics values, performs a simple aggregation, and presents the result in the chart. Widgets further improve Ambari's ease of use and are configurable; they display the metric properties collected by AMS.
Continuing from the previous section, the metrics.json template:

{
  "REDIS-MASTER": {
    "Component": [
      {
        "type": "ganglia",
        "metrics": {
          "default": {
            "metrics/total_connections_received": {
              "metric": "total_connections_received",
              "pointInTime": true,
              "temporal": true
            },
            "metrics/total_commands_processed": {
              "metric": "total_commands_processed",
              "pointInTime": true,
              "temporal": true
            },
            "metrics/used_cpu_sys": {
              "metric": "used_cpu_sys",
              "pointInTime": true,
              "temporal": true
            },
            "metrics/used_cpu_sys_children": {
              "metric": "used_cpu_sys_children",
              "pointInTime": true,
              "temporal": true
            }
          }
        }
      }
    ]
  }
}

widget.json:

{
  "layouts": [
    {
      "layout_name": "default_redis_dashboard",
      "display_name": "Standard REDIS Dashboard",
      "section_name": "REDIS_SUMMARY",
      "widgetLayoutInfo": [
        {
          "widget_name": "Redis info",
          "description": "Redis info",
          "widget_type": "GRAPH",
          "is_visible": true,
          "metrics": [
            {
              "name": "total_connections_received",
              "metric_path": "metrics/total_connections_received",
              "service_name": "REDIS",
              "component_name": "REDIS-MASTER"
            }
          ],
          "values": [
            {
              "name": "total_connections_received",
              "value": "${total_connections_received}"
            }
          ],
          "properties": {
            "graph_type": "LINE",
            "time_range": "1"
          }
        }
      ]
    }
  ]
}

At this point, restart ambari-server; the command is as follows:

ambari-server restart

3 Data acquisition and sending

A shell script collects Redis runtime information and sends it to the Metrics Collector in one shot; the script is as follows:

#!/bin/sh
url=http://$1:6188/ws/v1/timeline/metrics
while [ 1 ]
do
total_connections_received=$(redis-cli info |grep total_connections_received:| awk -F ':' '{print $2}')
total_commands_processed=$(redis-cli info |grep total_commands_processed:| awk -F ':' '{print $2}')
millon_time=$(( $(date +%s%N) / 1000000 ))
json="{
 \"metrics\": [
 {
 \"metricname\": \"total_connections_received\",
 \"appid\": \"redis\",
 \"hostname\": \"localhost\",
 \"timestamp\": ${millon_time},
 \"starttime\": ${millon_time},
 \"metrics\": {
 \"${millon_time}\": ${total_connections_received}
 }
 },
 {
 \"metricname\": \"total_commands_processed\",
 \"appid\": \"redis\",
 \"hostname\": \"localhost\",
 \"timestamp\": ${millon_time},
 \"starttime\": ${millon_time},
 \"metrics\": {
 \"${millon_time}\": ${total_commands_processed}
 }
}
 ]
}"
echo $json | tee -a /root/my_metric.log
curl -i -X POST -H "Content-Type: application/json" -d "${json}" ${url}
sleep 3
done

Run the following command (note that the first parameter is the Metrics Collector machine, not the Ambari Server machine):
./metric_sender.sh ambari_collector_host total_connections_received redis
If nothing goes wrong, the interface shows data after 2 to 4 minutes. Through the steps above, software that Ambari does not monitor out of the box can be brought under Ambari's monitoring and management.

The Implementation of Cobub's Codeless Capture Technology
With the advent of the big-data era, data mining has become more and more important, and front-end event tracking is a mature and widely used data collection method. There are currently two approaches: coded tracking and codeless tracking. Coded tracking is easy to understand: you call an SDK API and insert tracking code wherever user behavior should be captured. But because tracking points are inserted manually during development, every change in business requirements means changing the tracking code all over the project. Codeless tracking needs no manually inserted code: with some configuration up front, the SDK collects user behavior automatically, avoiding the heavy, error-prone rework caused by changing requirements. This article introduces the technical implementation of codeless capture.

The Implementation Process of Codeless Capture


1. Visual circle selection: a floating circle appears on the page; drag the circle onto the control whose event you want to configure, and a dialog pops up for entering the event.
2. In that dialog, type the custom event name; the name is bound to the view's viewPath. The viewPath is the view's unique identifier and is covered in detail below.
3. When the user taps a control, check whether the control has a bound event; if it does, upload the event.

The Technical Points in the Process

The Implementation of Visual View Selection

Subclass UIWindow for the floating circle, add a UIPanGestureRecognizer, and move the floating window according to the displacement of the pan gesture. When the gesture stops, take the floating window's center coordinates.
Then traverse the child views on the main window to find the view layer that contains the circle's center and can respond to user interaction; that is the view the user selected.
This follows iOS's event-delivery (hit-testing) chain, whose core method is UIView's - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event. The API traverses the child views automatically to find the view at the point, with nil passed for the event. Because the event parameter is nil, the view found may not actually be able to respond to user gestures; if it cannot, walk up through its parent views until a view that can respond to user behavior is found.

Selecting the View Binding Events

Generating the view's unique identifier viewPath: the previous steps yield the selected view, and determining its viewPath is the key. The viewPath must be unique across the whole application so that different events can be distinguished. Since no code is inserted, it has to be derived from the view's own attributes. We can view the app's view structure as a tree: the root is the UIWindow, the branches are UIViewControllers and UIViews, and the leaves are UIViews. The path from the root to a leaf can then be treated as unique; that is the view's viewPath. In this implementation, the viewPath has two parts: the node path and the matching node indexes. The node path is the Class names of the nodes joined together; a node's index is its position within its parent, such as a child view's subscript in the parent view's subviews array. Here is the logic diagram of traversing the nodes.

When computing a node's index, one special case deserves attention: the index of a reusable view is tied to its data source. For a view like UITableViewCell, the index cannot be the subscript in the parent's subviews; instead use the data-source subscript, such as the cell's indexPath.section and indexPath.row. Here are simple viewPath examples for a plain view and a reusable view: TestViewController-UIView-UIButton & 0-0-0, and TableViewController-UITableView-UITableViewCell & 0-0-1-0.
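As a language-neutral illustration of how such an identifier can be assembled, here is a Python sketch (the helper name is hypothetical; it mirrors the examples above):

def build_view_path(class_chain, index_chain):
    # join the class names from root to leaf, then append each node's
    # index within its parent, separated by '&'
    node_path = '-'.join(class_chain)
    node_index = '-'.join(str(i) for i in index_chain)
    return node_path + ' & ' + node_index

print(build_view_path(['TestViewController', 'UIView', 'UIButton'], [0, 0, 0]))
# TestViewController-UIView-UIButton & 0-0-0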
Detecting that the user has triggered a view with a bound event ID is the other key point; the core technique is runtime Method Swizzling. Here is how to hook the relevant methods for different kinds of controls:
1. For UIControl-type controls, hook - (void)sendAction:(SEL)action to:(id)target forEvent:(UIEvent *)event.
2. For UIScrollView, UITextView, UITableView, and UICollectionView controls, first hook the - (void)setDelegate:(id)delegate method, then hook the delegate methods that report events, such as textViewDidBeginEditing: and tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath.
3. For views with gesture events, hook the - (void)addGestureRecognizer: method, and inside it add a new target and action to the gesture via - (void)addTarget:(id)target action:(SEL)action.

Conclusion

Codeless capture boils down to the points analyzed above: first, visually select the view that needs an event binding and generate its unique viewPath; then, by hooking the system control methods, obtain the view the user touched, generate its viewPath, compare it with the local event list, and upload the event when it matches.

Three Years Running, a Million-User Micro-service Data Analysis Framework
Languages and technologies used in the architecture:

Data analysis has developed rapidly in recent years, and we also built a small data-analysis tool. The product has operated successfully for three years, serving enterprise customers with millions of daily active users. The product structure is very simple: PHP, the most approachable language in the world; MySQL, the most common database; and Apache or Nginx for the server, whichever you prefer.

1. Microservice architecture diagram:


Whole flow chart:
(1) The SDK uploads data to the server. If Redis is installed, the data first goes into Redis and is then periodically extracted to the database server; Redis greatly improves parallel data-processing capacity (see the sketch after this list).
(2) The database collects the raw data, and stored procedures compute it along different dimensions into data summary tables.
(3) The front-end reports display real-time, hourly, and daily data; read/write separation is best here.
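To illustrate step (1), here is a minimal sketch of the Redis buffering idea (the key name and batch size are hypothetical; it assumes the redis-py client):

import json
import redis

r = redis.Redis(host='localhost', port=6379)

def buffer_event(event):
    # on upload: push the raw event onto a list, which absorbs bursts cheaply
    r.rpush('raw_events', json.dumps(event))

def drain_batch(batch_size=1000):
    # periodic job: pop a batch from Redis for bulk insertion into the database
    batch = []
    for _ in range(batch_size):
        item = r.lpop('raw_events')
        if item is None:
            break
        batch.append(json.loads(item))
    return batch  # hand off to a bulk INSERT into the raw-data tables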

2. Functional framework


The functional architecture includes functions, roles, and permissions. Functions are the enterprise services: each feature a user uses is a service to the enterprise. Roles are categories of user actions, mapped to functions and permissions. To understand the state of a system's architecture, start with the functional architecture.

3. Application framework

The application architecture includes the existing architecture diagram, the state of the web application, and the interface architecture. The interfaces are the key to the application layer: they are how programs interact.
The main interfaces include clientdata, usinglog, event, errorlog, and so on.
The SDK sends data to the backend periodically through these interfaces.
The application architecture lists the end-to-end invocation relationships.

4. Data design

Two databases, about one hundred tables. The database design relies on the business data: classify the business data, and the E-R diagram of the data design follows, and with it the final database design. A database designed well early on is easy to scale and easy to split. The statistics tables are divided mainly by statistical dimension: user, device, error information, and so on.
(1) Data processing capacity
With millions of daily active users, launches number about two million, and events plus page views reach at least three to five million, averaging 500,000 rows of data per hour. In operation, customer traffic concentrates in the morning and evening peaks; given each customer's particular pattern, some tasks are scheduled into idle hours, such as daily, weekly, and monthly tasks. Good hardware is a great helper for data processing: more memory and faster drives definitely make the data flow fast.
(2) Data cleaning and read/write separation
A large amount of raw data enters the database and, once processed, becomes garbage data. After all report data has been computed and written to the various dimension tables, the raw data needs to be removed periodically.
For the front-end reports, it is best to separate the display database from the storage and analysis database.

5. Physical Schema

This micro-service's physical architecture needs very little hardware; it can run on a single machine. Since analysis and statistics are mostly about data-processing capacity, two database servers and one web server are enough. Years of operation show that database processing capacity is the biggest bottleneck of statistical analysis.

6. Directions for continual optimization

(1) Data read/write separation and data cleaning.
(2) Concurrency.

7. Customers

The customer's most important data:
What matters most to every customer is the user table: new users, active users, and retention. Different customers have different requirements for deciding whether a "user" is a real person or a machine, and users map to device numbers and user IDs (user numbers).
Event data is also important; it relates to conversion rates.
Page views matter just as much as events.
Error data can detect bugs in the application.
Different customers and different usage scenarios place different demands on the indicators.

Actual Combat of Apache NiFi Processor
1 Introduction

What is Apache NiFi? NiFi's website explains: "an easy to use, powerful, and reliable system to process and distribute data." In plain terms, Apache NiFi is an easy-to-use, powerful, and reliable data processing and distribution system. It is designed for data flow and supports highly configurable directed graphs of data routing, transformation, and system mediation logic.
To describe NiFi more clearly, here is a brief introduction via the NiFi architecture, shown in the figure below.

A translation of the official description of the individual components:
- WebServer: hosts NiFi's HTTP command and control API.
- Flow Controller: the core of the operation, with the Processor as the processing unit. It provides threads for extensions to run on and manages the schedule of when extensions receive resources to execute.
- Extensions: the various types of NiFi extension, described in other documents; the key point is that extensions operate and execute within the JVM.
- FlowFile Repository: where NiFi keeps track of the current state of each FlowFile active in the flow. The implementation is pluggable; the default is a persistent write-ahead log located on a specified disk partition.
- Content Repository: where the actual content bytes of a given FlowFile live. The implementation is pluggable; the default is a fairly simple mechanism that stores blocks of data in the file system.
- Provenance Repository: where all provenance event data is stored; also pluggable. The default implementation uses one or more physical disk volumes, with the event data indexed and searchable in each location.

2 Introduction to the NiFi Processor

The previous section introduced the basic concepts through the NiFi architecture diagram. The Flow Controller is the core of NiFi, but what is it concretely? The Flow Controller brokers file communication between processors: it maintains the connections between multiple processors and manages each Processor, and the Processor is the actual processing unit. So what Processors does NiFi include? Let us look through the NiFi UI:

As can be seen above, Processors cover many types of components, such as amazon, attributes, and hadoop, easily identified by their prefixes: names starting with Get or Fetch fetch data, such as GetFile, GetFTP, and FetchHDFS; names starting with Execute execute things, such as ExecuteSQL, ExecuteProcess, and ExecuteFlumeSink. The naming makes their use easy to guess.

3 Actual Combat of the NiFi Processor

Having introduced NiFi's architecture and Processors, what about actual combat? This article takes the author's real requirement as the example: choose a data-processing scheduling tool to run server scripts on custom schedules. The server scripts involve environment variables, Oracle databases, and Hadoop ecosystem components. When a scheduled script finishes, the tool must return the script's run state and provide a re-run interface for failures.
To meet the requirement, various scheduling tools were considered, such as Apache Oozie, Azkaban, and Pentaho. After comparing their pros and cons, Apache NiFi was tried: a look at the NiFi Processor API shows that the ExecuteProcess processor supports remote operation well. Below is the actual combat, driven by the requirement.

3.1 Add and configure the Processor

1. Add the ExecuteProcess processor to the canvas.

2. Right-click on ExecuteProcess, select Configure Processor, and fill in the Properties tab. Each configuration option comes with its own explanation, as shown below.

As the figure above shows, a few options need explanation:
- Command: sh
- Command Arguments: -c;ssh user@ip sh js/job/job_hourly.sh `date
- Batch Duration: not set. (We schedule at fixed times rather than at fixed intervals.)
- Redirect Error Stream: not set.
- Argument Delimiter: ; (the arguments are split on semicolons)

3.2 Processor scheduling

NiFi supports three scheduling strategies: Timer Driven, CRON Driven, and Event Driven (not selectable here). Given our requirement we choose CRON Driven. My understanding of CRON here is crontab-style scheduling: the fields mean second, minute, hour, day, month, week, and year, used together with *, ?, and L (* matches any value of the field; ? means no specific value for the field; L stands for "last"). For example, "0 0 13 * * ?" schedules a run at 1 PM every day. The scheduling parameters are therefore configured to our requirements, as shown in the figure below.

3.3 Operation state monitoring

NiFi exposes REST APIs for developers, and here we monitor the run state through the Processor API (getting the state parameter, and starting and stopping the Processor).
1. Getting the run state:
The command is: curl 'http://IP/nifi-api/processors/processorsID'. The returned JSON can be run through a parser to extract the state.
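The same state check can be scripted; here is a small sketch with Python's requests library (the host and processor id are placeholders, as above):

import requests

PROCESSOR_URL = 'http://IP/nifi-api/processors/processorsID'

def get_processor_state():
    # fetch the processor JSON and return its run state, e.g. RUNNING or STOPPED
    resp = requests.get(PROCESSOR_URL)
    resp.raise_for_status()
    return resp.json()['component']['state']

print(get_processor_state())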

2. Starting and stopping the Processor:
NiFi's Processor is started and stopped through the API's Put method; the effective action of a Put is to change the run state. A NiFi Processor has three states: Running, Stopped, and Disabled.
Below are the two REST API commands the script executes to start and stop the Processor:
- Start command (using the REST API's PUT method):
curl -i -X PUT -H 'Content-Type:application/json' -d '
{
"revision": {
"clientId": "586ec1d7-015d-1000-6459-28251212434e",
"version":17},
"component": {
"id": "39e0dafc-015d-1000-918d-bee89ae2226e",
"state": "RUNNING"
}
}' http://IP/nifi-api/processors/processorsID
- Stop command (using the REST API's PUT method):
curl -i -X PUT -H 'Content-Type:application/json' -d '
{
"revision": {
"clientId": "586ec1d7-015d-1000-6459-28251212434e",
"version":17},
"component": {
"id": "39e0dafc-015d-1000-918d-bee89ae2226e",
"state": "STOPPED"
}
}' http://IP/nifi-api/processors/processorsID

4 Summary and postscript

This article first introduced Apache NiFi and then, using the author's real requirement as an example, walked through the actual combat of NiFi's core component, the Processor. NiFi became an Apache top-level project not long ago: it is very powerful, but accessible resources are still limited, so this article is more a brick thrown out to attract jade. NiFi's real strength is data processing, and discussion from interested readers is welcome.

How Much Do You Know about Distributed Coordination Service Zookeeper?
ZooKeeper introduction

Since we learned about a distributed framework (dubbo) that involves ZooKeeper, let's start with a brief introduction. ZooKeeper is a distributed coordination service for managing a large number of hosts.

1. Distributed applications

Distributed applications can perform specific tasks by coordinating among themselves, running across multiple systems in a network quickly and efficiently at any given moment (and concurrently).
A distributed application has two parts: the server applications and the client applications, as shown in the figure below:

2. Advantages of distributed applications

Reliability, extensibility, transparency.

3. Services provided by zookeeper

Naming service, configuration management, cluster management, leader election, locking and synchronization services, and a data registry.

Basics of ZooKeeper

1. The architecture of ZooKeeper

ZooKeeper follows a client-server architecture, as shown below:

The components of the ZooKeeper architecture are explained in the table below:
(1) Client: sends messages to the server.
(2) Server: one node of the ZooKeeper ensemble; provides all services to clients.
(3) Ensemble: the group of ZooKeeper servers.
(4) Leader: the server node that performs automatic recovery if any connected node fails.
(5) Follower: a server node that follows the leader's instructions.

2. Hierarchical namespace

The following figure shows the tree structure used to represent the ZooKeeper file system in memory. A ZooKeeper node is called a znode. Each znode is identified by a name, with path components separated by the slash (/).

ZooKeeper's namespace is made up of znodes, organized much like a file system: each node corresponds to a directory or file, and its path is its unique identifier. Unlike a file system, each node carries data content and may also have child nodes. Every znode in the ZooKeeper data model maintains a stat structure. The stat simply provides the znode's metadata: version number, access control list (ACL), timestamps, and data length.

ZooKeeper components

Under one ZooKeeper service there are two types of server: the leader server and the follower servers. The leader is special because it has the power to decide. Every server under the whole ZooKeeper service replicates each component. The Replicated Database is an in-memory database that contains all of the data.

ZooKeeper leader election

Let's analyze the election of a leader node in a ZooKeeper ensemble. Consider a cluster of N nodes. The leadership election proceeds as follows (see the sketch after this list):
- All nodes create a sequential znode with the same path prefix, /app/leader/guid_.
- The ZooKeeper ensemble appends a 10-digit sequence number to the path.
- For a given instance, the node that created the znode with the smallest number becomes the leader, and all the other nodes are followers.
- Each follower node watches the znode with the next-smallest number.
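A minimal sketch of these steps using the unofficial Python client kazoo (connection string is hypothetical; the path is the one above):

from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()
zk.ensure_path('/app/leader')

# each candidate creates an ephemeral, sequential znode under the same path
my_node = zk.create('/app/leader/guid_', ephemeral=True, sequence=True)

# the candidate holding the smallest sequence number is the leader
children = sorted(zk.get_children('/app/leader'))
is_leader = my_node.split('/')[-1] == children[0]
print(is_leader)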

ZooKeeper installation and configuration

1. Installation of Java (abbreviated)

2. Installation of the ZooKeeper framework

(1) Download and extract the tar package (abbreviated)
(2) Create the configuration file: open conf/zoo.cfg and set the following parameters as a starting point.

tickTime = 2000
dataDir = /path/to/zookeeper/data
clientPort = 2181
initLimit = 10
syncLimit = 5

(3) Launch the ZooKeeper server

$ bin/zkServer.sh start

(4) Start the CLI

$ bin/zkCli.sh

(5) Stop ZooKeeper server

$ bin/zkServer.sh stop

Zookeeper CLI

The ZooKeeper command-line interface (CLI) is used to interact with the ZooKeeper ensemble, which is useful for debugging and for trying out the different options.
To run ZooKeeper CLI operations, first start the ZooKeeper server ("bin/zkServer.sh start"), then the ZooKeeper client ("bin/zkCli.sh"). Once the client is running, you can: (1) create znodes, (2) get data, (3) watch a znode for changes, (4) set data, (5) create children of a znode, (6) list the children of a znode, (7) check status, (8) delete a znode.

1. Create Znodes

create  /path /data

2. Get data

get  /path 

3. Monitor

get  /path watch

4. Set Data

 set  /path /data

5. Create child znode

create  /parent/path/subnode/path /data

6. List child znode

ls  /path

7. Check status

stat  /path

8. Delete a Znode

rmr  /path

Commonly used API of Zookeeper

ZooKeeper's official API has Java and C bindings. The ZooKeeper community provides unofficial APIs for most other languages (.NET, Python, etc.). Using the ZooKeeper API, applications can connect to ZooKeeper, interact with it, manipulate data, coordinate, and finally disconnect.

1. Basic knowledge of ZooKeeper’s API

A client should follow these steps to interact cleanly with the ZooKeeper ensemble:
Connect to ZooKeeper; the ensemble assigns the client a session ID.
Send heartbeats to the server regularly; otherwise the ensemble expires the session ID and the client must reconnect.
Get and set znodes as long as the session ID is active.
Disconnect from ZooKeeper when all tasks are complete. If the client is inactive for a long time, the ensemble automatically disconnects it.

2. Java binding

Let's go through the most important parts of the ZooKeeper API in this chapter. The central part of the ZooKeeper API is the ZooKeeper class. Its constructor provides options to connect to the ZooKeeper ensemble, and it offers the following methods:
- connect: connect to the ZooKeeper ensemble
- create: create a znode
- exists: check whether a znode exists, and get its information
- getData: get data from a particular znode
- setData: set the data of a particular znode
- getChildren: get all the available children of a particular znode
- delete: delete a particular znode and all its children
- close: close the connection

3. Connect to the ZooKeeper ensemble

The ZooKeeper class provides connectivity through its constructor. The constructor is as follows:

ZooKeeper(String connectionString, int sessionTimeout, Watcher watcher)
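As a minimal sketch (the connect string 127.0.0.1:2181 and the 5000 ms timeout are assumptions for illustration), connecting might look like this; the short fragments in the following sections reuse this handle zk and its imports:

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkConnect {
    public static void main(String[] args) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // the constructor returns immediately; the watcher signals us
        // once the session is actually established
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 5000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        System.out.println("connected, session id = " + zk.getSessionId());
        // ... use zk here ...
        zk.close();
    }
}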

4. Create a Znode

The ZooKeeper class provides a method to create a new znode in the ZooKeeper ensemble. The create method is as follows:

create(String path, byte[] data, List acl, CreateMode createMode)
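A brief usage sketch, reusing the connected handle zk from the sketch above; the path /app/config and the data are invented for illustration:

// requires org.apache.zookeeper.CreateMode and org.apache.zookeeper.ZooDefs
String created = zk.create("/app/config", "hello".getBytes(),
        ZooDefs.Ids.OPEN_ACL_UNSAFE,   // an open ACL, fine for experiments
        CreateMode.PERSISTENT);        // persistent: survives the session
System.out.println("created " + created);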

5. The exists method

The exists method checks whether a znode exists. If the specified znode exists, it returns the znode's metadata. The exists method is as follows:

exists(String path, boolean watcher)
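For example, continuing with the same assumed handle and path:

// requires org.apache.zookeeper.data.Stat; returns null if the path is absent
Stat stat = zk.exists("/app/config", false);   // false: do not set a watch
if (stat != null) {
    System.out.println("exists at version " + stat.getVersion());
}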

6. The getData method

The getData method retrieves the data attached to the specified znode, together with its status. The getData method is as follows:

getData(String path, Watcher watcher, Stat stat)
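A short usage sketch under the same assumptions; the passed-in Stat object is filled with the znode's metadata:

Stat stat = new Stat();                                  // filled by the call
byte[] data = zk.getData("/app/config", null, stat);    // null: no watcher
System.out.println(new String(data) + " @ version " + stat.getVersion());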

7. The setData method

The setData method changes the data attached to the specified znode. The setData method is as follows:

setData(String path, byte[] data, int version)
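For example, updating the znode read above; the version argument acts as an optimistic-concurrency check:

// pass the version read earlier for optimistic locking, or -1 to force the write
Stat updated = zk.setData("/app/config", "world".getBytes(), stat.getVersion());
System.out.println("now at version " + updated.getVersion());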

8. The getChildren method

The getChildren method gets all the children of a specific znode. The getChildren method is as follows:

getChildren(String path, Watcher watcher)
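For example (note that the returned values are child names, not full paths):

for (String child : zk.getChildren("/app", null)) {   // null: no watcher
    System.out.println("/app/" + child);
}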

9. Delete a Znode

The delete method deletes the specified znode. The delete method is as follows:

delete(String path, int version)
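For example, cleaning up the example znode and releasing the session:

zk.delete("/app/config", -1);   // -1 skips the version check
zk.close();                     // release the session when done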

Activiti analyses http://www.dpkxx.com/en/activiti_analyses/ Thu, 24 Aug 2017 06:41:54 +0000 http://www.dpkxx.com/?p=6931

The Activiti framework is one of the workflow frameworks that has developed rapidly in recent years thanks to its open-source nature. Another workflow framework, jBPM5, is currently very popular. While these two frameworks come from two different companies, they have a lot in common. Tom Baeyens, the lead of the Activiti workflow framework, previously worked for JBoss, the current publisher of jBPM5, as chief architect of the earlier jBPM4 workflow engine. It was speculated that Tom Baeyens left JBoss for Alfresco because of internal contradictions within JBoss and a serious disagreement over the future version of the workflow engine. Within months, Tom Baeyens launched Activiti, an open-source workflow system based on the jBPM4 workflow engine.

Both the Activiti framework and the jBPM5 framework are BPM (business process management) systems compliant with the BPM specification, and both provide a BPMN2 process modeling and execution environment. Both are open-source projects following the ASL (Apache Software License). Both originate from JBoss (Activiti5 is a derivative of jBPM4, and jBPM5 is based on Drools Flow). Both are mature, built from scratch, and started about two and a half years ago. Both have life-cycle management of human tasks. The difference is that jBPM5 describes human tasks and manages their life cycle based on the WebService-HumanTask standard; if you are interested in that standard and its merits, refer to the WS-HT specification. Both use the Oryx process editor, in different branches, to model BPMN2: jBPM5 uses the open-source branch maintained by Intalio, while Activiti5 uses the branch maintained by Signavio.

As a workflow framework, Activiti is widely used in many software development companies. If you want to use the Activiti open-source workflow system to implement your own business system, the first step is to familiarize yourself with the BPMN 2.0 specification, although this is not strictly required. The BPMN 2.0 specification is implemented as a standard that establishes the basic models that may be encountered in a workflow business system.

The current mainstream Java IDEs are Eclipse and IntelliJ IDEA. Both development tools support Activiti development with a graphical process editor. The process editor parses the designed business process and generates a .bpmn file, which is essentially an .xml file that declaratively describes each step of the process and its business type; Activiti's process engine can then parse this XML file and perform the corresponding operations and process transitions.

Here is the GitHub community address of Activiti, which can be downloaded for reference. So what is the workflow engine that the Activiti workflow system is based on? Activiti provides a set of Java API interfaces for business systems; the process engine is actually an instance of the ProcessEngine class, and only through this object can all workflow business content be obtained and all process operations performed. Figure 1.1 shows the workflow engine object and the objects it can derive:

The activiti.cfg.xml file is the core configuration file. When integrated into the Spring IoC container, it produces a ProcessEngineConfiguration object, the process engine configuration object. The ProcessEngine object is the process engine object, the core of the workflow business system; all operations are implemented through objects derived from it. Please refer to the Activiti5 API documentation for operations on this object.

The current Activiti5 workflow business system involves 23 tables, as shown in figure 1.2. Of course, not all of these tables are required; tables that are not used are naturally unnecessary. Activiti5 currently supports mainstream databases such as MySQL, Oracle, and DB2, and the default database is H2. For the related database configuration, refer to the documentation.

The Activiti workflow business system integrates very well with Spring, which is a good feature for developers familiar with the Spring framework. However, the Activiti framework does not encapsulate business functions; it only implements basic operations, letting users build specific functions themselves. Because Activiti provides no effective encapsulation of the "reject" (send back) operation, a developer who wants that function needs to wrap an interface manually through the Activiti API.
So how do we use the Activiti workflow framework? Next, let's briefly walk through its use. To use the Activiti framework, first take a look at Activiti's basic programming framework:

The first step is the development tools; as we said before, you can use Eclipse or IntelliJ IDEA with the Activiti graphical process editor integrated. Here we use the Eclipse development tools. As shown in the figure below:

Step 2: introduce the corresponding Activiti jar packages, which can be downloaded manually or managed with the Maven project management tool. Here we use Maven:

Step 3: as we said earlier, the Activiti workflow business system requires 23 tables, so creating the tables is also essential. You can create them through Activiti's workflow engine object, and that is how we will create them, as sketched below.
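A minimal sketch of this step with the Activiti 5 API is shown below; the MySQL connection settings are placeholders, and DB_SCHEMA_UPDATE_TRUE asks the engine to create or update the tables on startup.

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngineConfiguration;

public class CreateTables {
    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngineConfiguration
                .createStandaloneProcessEngineConfiguration()
                .setJdbcDriver("com.mysql.jdbc.Driver")
                .setJdbcUrl("jdbc:mysql://localhost:3306/activiti")
                .setJdbcUsername("root")
                .setJdbcPassword("root")
                // create the 23 tables if missing, update them if outdated
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
                .buildProcessEngine();
        System.out.println("engine built: " + engine.getName());
    }
}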

After completing the three steps above, it's easy to implement a leave business process. First, draw the flow diagram: set the process start node (start) and end node (end), and set the task (userTask). As shown in the figure below:

It is important to note that every userTask in the flow chart needs an assignee, the next person to handle the task. The assignee can be set dynamically with process variables or fixed directly; here we fix it first. As shown in the figure below:

After finishing the business process, let's look at what kind of document a BPMN file is. Opening it with a plain-text editor, we find it is essentially an XML file, just with a .bpmn suffix. As shown in the figure below:

Next comes actually writing the code. As mentioned earlier, the Activiti workflow framework fuses almost perfectly with Spring, but since this article only analyzes Activiti itself, it uses the native API to implement a simple leave process.
First, use the workflow engine configuration object to load the activiti.cfg.xml file. As mentioned earlier, activiti.cfg.xml is the workflow configuration file; its contents are posted here, and the database used is the MySQL database.

You can use ProcessEngineConfiguration to load the configuration file and obtain the ProcessEngine object through the configuration object; this object can also be created without a configuration file. Everything that follows is done through objects derived from the ProcessEngine. In fact, the table-creation code just now does exactly this.
Use the ProcessEngine object to load the leave-process definition file and create the process definition template, which is the deployment step. Next, create a process instance object from the process definition template, fill in the leave form, and save the leave sheet; that is, create a leave process.

This code creates the definition of the leave process.

Here, engine is the ProcessEngine instance; processDefinitionId is the Id of the process definition, generated when the process definition is created; runtimeService.startProcessInstanceById creates a process instance object from the process definition Id, and this process instance corresponds to one leave request.
After a leave process is created, it is submitted to the superior (for example, the department manager); once the manager approves, the leave process is complete. Here 10002 is the unique identifier of the task; each task in the flowchart corresponds to a taskId. A consolidated sketch of these steps follows.
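Putting the steps above together, a minimal sketch with the native Activiti 5 API might look like this; the resource name leave.bpmn and the taskId "10002" are illustrative, not from the original article.

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngines;
import org.activiti.engine.repository.Deployment;
import org.activiti.engine.runtime.ProcessInstance;

public class LeaveDemo {
    public static void main(String[] args) {
        // load activiti.cfg.xml from the classpath and build the engine
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();

        // deploy the leave process definition (the .bpmn file)
        Deployment deployment = engine.getRepositoryService().createDeployment()
                .addClasspathResource("leave.bpmn")
                .deploy();

        // look up the generated process definition id for this deployment
        String processDefinitionId = engine.getRepositoryService()
                .createProcessDefinitionQuery()
                .deploymentId(deployment.getId())
                .singleResult()
                .getId();

        // start a process instance: one run of the leave process
        ProcessInstance instance = engine.getRuntimeService()
                .startProcessInstanceById(processDefinitionId);
        System.out.println("started instance " + instance.getId());

        // the assignee then completes the current task (taskId like "10002")
        // engine.getTaskService().complete("10002");
    }
}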

The leave process example in this article does not involve direct database operations; all the database work is done for us by the Activiti framework, and we only need to implement the corresponding business processes. This greatly simplifies workflow-related code, but to use the Activiti workflow framework well you must have a thorough and detailed understanding of the API documentation. To learn more about Activiti's database operations, please refer to the relevant documentation.

A First Look at RPC Framework Technology (RPC框架技術(shù)初窺) http://www.dpkxx.com/en/the_preliminary_discussion_of_rpc_technology/ Mon, 21 Aug 2017 01:37:12 +0000 http://www.dpkxx.com/?p=6919

What is RPC?

RPC (Remote Procedure Call Protocol) is a protocol for requesting services from a program on a remote computer over a network, without needing to understand the underlying network technologies.
RPC uses a client/server model. The requesting program is the client, and the service-providing program is the server. First, the client calling process sends a call message carrying the procedure's parameters to the service process, then waits for the reply message. On the server side, the process stays asleep until a call message arrives. When one arrives, the server obtains the procedure parameters, computes the result, sends the reply message, and then waits for the next call. Finally, the client calling process receives the reply message, obtains the procedure result, and execution continues.
The above is Baidu Baike's explanation of RPC.
A more accessible description: the client invokes an object that lives on a remote computer as if it were an object in a local application, without knowing the details of the invocation.

The background of RPC

In the early single-machine days, many processes ran on one computer. If process A needed a drawing function and process B also needed a drawing function, the programmer had to write the drawing function for both processes. Isn't that wasteful? So IPC appeared (inter-process communication between processes running on a single machine). Now that A has the drawing function, B can call the drawing function in process A.
In the Internet age, everyone's computers are interconnected. Earlier, programs could only invoke processes on their own computer; could they invoke processes on other machines? Programmers then extended IPC across the network, and that produced RPC.
Now the drawing function can serve clients as an independent service.

RPC framework features

The RPC protocol

Since the protocol is just a set of specifications, implementations need to follow it. Typical RPC implementations currently include Dubbo, Thrift, gRPC, Hetty, and so on.

Transparent network protocol and network I/O

Since the RPC client believes it is calling a local object, it doesn't need to care whether the transport layer uses TCP/UDP, HTTP, or some other network protocol. And since the network protocol is transparent to it, the caller also doesn't need to care which network I/O model is used during the call.

Transparent message format

In a local application, an object call passes some parameters and returns a call result. How the object uses those parameters internally and computes the result is not the caller's concern. Likewise in RPC, the parameters are passed in some message format to another computer over the network; how that message format is constructed is not something the caller needs to care about.

Cross-language capability

The caller doesn't actually know what language the remote server's application is written in. Whatever language is used on the server side, the call should succeed, and the return value should be described in a form that the caller's programming language can understand.

How the RPC framework works


1. The client invokes the client handle and passes in the parameters
2. The client handle calls the local system kernel to send the network message
3. The message is sent to the remote host
4. The server handle receives the message and extracts the parameters
5. The remote procedure is executed
6. The executed procedure returns the result to the server handle
7. The server handle returns the result, invoking the remote system kernel
8. The message travels back to the local host
9. The client handle receives the message from the kernel
10. The client receives the data from the handle

Building your own RPC framework

The work to implement in code

1. Design external interfaces

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface IService extends Remote {

    public String queryName(String no) throws RemoteException;

}

2. Service implementation of the server

import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class ServiceImpl extends UnicastRemoteObject implements IService {

    private static final long serialVersionUID = 682805210518738166L;

    protected ServiceImpl() throws RemoteException {
        super();
    }

    @Override
    public String queryName(String no) throws RemoteException {
        // the concrete implementation of the method
        return String.valueOf(System.currentTimeMillis());
    }
}

3. RMI server implementation

import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class Server {
    public static void main(String[] args) {
        Registry registry = null;
        try {
            // create a service registry manager on port 8088
            registry = LocateRegistry.createRegistry(8088);
        } catch (RemoteException e) {
            e.printStackTrace();
        }
        try {
            // create a service
            ServiceImpl server = new ServiceImpl();
            // bind the service under a name
            registry.rebind("vince", server);
        } catch (RemoteException e) {
            e.printStackTrace();
        }
    }
}

4. Client-side implementation

import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

public class Client {
    public static void main(String[] args) {
        Registry registry = null;
        try {
            // get the service registry manager
            registry = LocateRegistry.getRegistry("127.0.0.1", 8088);
        } catch (RemoteException e) {
            e.printStackTrace();
        }
        try {
            // look up the service by name
            IService server = (IService) registry.lookup("vince");
            // call the remote method and get the result
            String result = server.queryName("ha ha ha ha");
            System.out.println(result);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

In-depth Analysis of jQuery Implementation Principles, Chapter 1 (深入解析jquery實(shí)現(xiàn)原理第一章) http://www.dpkxx.com/en/the_first_chapter_of_the_in-depth_analysis_of_j-query_implementation/ Thu, 17 Aug 2017 01:53:30 +0000 http://www.dpkxx.com/?p=6912

JQuery is a very good JavaScript library, which greatly enhances the development experience of the front end js, so I recently looked at the source code of JQuery and wanted to share some of my understanding with you.
First, let's look at jQuery's overall structure in code 1-1:

(function (window, undefined) {
    // construct the jQuery object
    var jQuery = (function () {
        var jQuery = function (selector, context) {
            return new jQuery.fn.init(selector, context, rootjQuery);
        };
        return jQuery;
    })();
    // tools: Utilities
    // callback function list: Callbacks Object
    // asynchronous queue: Deferred Object
    // browser feature tests: Support
    // data cache: Data
    // queue: Queue
    // attribute operations: Attributes
    // event system: Events
    // selector: Sizzle
    // DOM traversal: Traversing
    // DOM manipulation: Manipulation
    // style operations: CSS (computed style, inline style)

    // expose jQuery to the global scope
    window.jQuery = window.$ = jQuery;
})(window);

Code 1-1
From the code above, we can see that all of jQuery's code is written inside an anonymous function that executes immediately, known as a "self-invoking anonymous function". When the browser loads the jQuery js file, this self-invoking anonymous function executes immediately and initializes jQuery's modules.
First, let me explain the advantage of using a self-invoking anonymous function: creating one is equivalent to creating a private function scope, so the code inside it won't clash with existing functions, methods, and variables of the same name. jQuery's code is therefore not disturbed by other code, and it doesn't pollute global variables or affect other code. There are two ways of writing a self-invoking anonymous function, as follows:

// style 1
(function () {
    // ...
}());
// style 2
!function () {
    // ...
}();

Code 1-2
From code 1-1, we can see that at the end of the self-invoking anonymous function, jQuery is added to the window object; the variable jQuery thus becomes an exposed global variable, while everything else remains private. Passing the window object into the self-invoking anonymous function as a parameter turns it into a local variable, so accessing window inside the jQuery code block doesn't require walking back up to the top-level scope, and the window object can be accessed more quickly.
The parameter undefined is declared but never passed in the self-invoking anonymous function, because the special value undefined is a property of the window object, for example:

alert("undefined" in window);          //true

The above code pops up true. Declaring the parameter undefined without passing an argument guarantees that, inside the function, undefined really holds the undefined value, because in older engines the global undefined could be overwritten with a new value. You can try modifying the undefined value with the following code:

undefined = "now is's defined";
alert( undefined );

Of course, this modification is not allowed in newer browsers, such as IE 9.0, Chrome 17.0.963.56, and Firefox 4.0, where undefined is read-only.
Usually in JavaScript, if statements are placed on separate lines, the semicolon (;) is optional. But omitting the semicolon before or after a self-invoking anonymous function can break the code. The following code throws an exception when executed:
Case 1

var n = 1
(function(){})()
//TypeError: number is not function

In the code above, because the line before the anonymous function does not end with a semicolon, the opening pair of parentheses of the self-invoking anonymous function is treated as a function-call operator on the previous value (the number 1).
Case 2

(function(){})()
(function(){})()
//TypeError: undefined is not function

In the code above, because the first self-invoking anonymous function is not followed by a semicolon, the opening pair of parentheses on the next line is treated as a call on its return value (undefined). So when using self-invoking anonymous functions, it's best not to omit the semicolons before and after them.
A jQuery object is an array-like object with consecutive integer properties, a length property, and a large number of jQuery methods. jQuery objects are created by the constructor jQuery(), and $() is an abbreviation of jQuery(). Depending on the arguments passed, the logic for creating the jQuery object differs. The constructor jQuery() has seven usages, as shown below:

That covers jQuery's overall structure for this time. I recommend studying the design principles and implementation of the jQuery architecture in depth; the technical points of jQuery are treated there in great detail.
