
What Programmers Need to Know About Character Encoding

Character encoding looks like a trivial matter and is often overlooked by engineers, yet it easily leads to baffling bugs. This article collects some general knowledge about character encoding that I hope you will find useful.

It all starts with ASCII

Any discussion of character encoding has to begin with a brief history of ASCII. Computers were originally invented to solve numerical computation problems; later, people realized they could do much more, such as text processing. But since computers only understand numbers, people had to tell the computer which number stands for which character, e.g. 65 for the letter 'A', 66 for 'B', and so on. The character-to-number mapping had to be consistent across machines, or the same sequence of numbers would display as different characters on different computers. So the American National Standards Institute (ANSI) defined a standard specifying a set of common characters and the number assigned to each one: the ASCII character set (Character Set), also known as ASCII code.

Computers at the time generally used the 8-bit byte as the smallest unit of storage and processing. Since the characters in use were few — the 26 letters in upper and lower case, the digits, plus other common symbols came to fewer than 100 — 7 bits were enough to store and process ASCII efficiently, and the remaining high bit was used by some communication systems for parity checking.

Note that a byte is the smallest unit a system can process, and is not necessarily 8 bits; 8-bit bytes are merely the de facto standard of modern computers. To avoid ambiguity, many technical specifications prefer the term octet over byte to emphasize a run of exactly 8 bits. For readability, the rest of this article sticks with the familiar term "byte".

The ASCII character set consists of 95 printable characters (0x20-0x7E) and 33 control characters (0x00-0x1F and 0x7F). Printable characters are shown on output devices such as screens or printed paper; control characters send special instructions to the computer. For example, 0x07 makes the computer beep, 0x00 is commonly used to mark the end of a string, and 0x0D and 0x0A tell a printer to return the print head to the start of the line (carriage return) and advance to the next line (line feed).

Character encoding and decoding back then was a simple table lookup. To encode a character sequence into a binary stream for storage, you just looked up each character's byte in the ASCII table and wrote that byte to the storage device; decoding a binary stream worked the same way in reverse.
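
The table-lookup process above can be sketched in a few lines of Python; here the built-in `ord()`/`chr()` functions stand in for the ASCII table, and the function names are my own:

```python
# ASCII encoding/decoding as a plain table lookup: each character maps
# to one byte, and decoding is the reverse lookup.
def ascii_encode(text: str) -> bytes:
    codes = []
    for ch in text:
        code = ord(ch)           # look up the character's number
        if code > 0x7F:
            raise ValueError(f"{ch!r} is not an ASCII character")
        codes.append(code)
    return bytes(codes)

def ascii_decode(data: bytes) -> str:
    return "".join(chr(b) for b in data)

print(ascii_encode("AB"))        # b'AB', i.e. bytes 0x41 0x42
print(ascii_decode(b"\x41\x42")) # AB
```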

The proliferation of OEM character sets

As computing spread, people found that the meager 128 characters of ASCII no longer met their needs. A byte can represent 256 numbers, but ASCII uses only 0x00-0x7F, the first 128; the remaining 128 numbers seemed free for the taking, and many people had designs on them. The problem was that everyone had the same idea at once, yet different opinions about which characters the numbers 0x80-0xFF should map to. The result was a profusion of OEM character sets on machines sold around the world.

The table below shows one of the OEM character sets shipped with the IBM PC. Its first 128 characters are mostly identical to ASCII ("mostly", because the first 32 control characters are treated as printable characters by the IBM PC in some situations); the upper 128 positions were filled with accented characters used in some European countries, plus characters for drawing boxes and lines.

In fact, most OEM character sets are ASCII-compatible: everyone interprets the range 0x00-0x7F in essentially the same way, while interpretations of the upper half, 0x80-0xFF, differ; sometimes the same character even maps to different bytes in different OEM character sets.

Different OEM character sets made it impossible to exchange documents across machines. For example, if employee A sent employee B a résumé containing the word résumés, B might see rגsumגs instead, because é mapped to byte 0x82 in A's OEM character set, while on B's machine, which used a different OEM character set, byte 0x82 decoded to ג.

Multi-byte character sets (MBCS) and Chinese character sets

All the character sets above use single-byte encodings: one byte translates to one character. That may be fine for countries using Latin scripts, since putting the 8th bit to use yields 256 characters, which is enough. For Asian countries, however, 256 characters fall far short. To use computers while staying compatible with ASCII, these countries invented multi-byte encodings; the corresponding character sets are called multi-byte character sets. China, for example, uses double-byte character set encodings (DBCS, Double Byte Character Set).

For a single-byte character set, the code page needs only one lookup table recording the characters for the 256 numbers, and a program can encode and decode with a simple table lookup.

A code page is the concrete implementation of a character set's encoding; you can think of it as a character-to-byte mapping table, with the character/byte translation done by lookup. More on this below.

For a multi-byte character set, a code page usually contains many lookup tables. How, then, does a program know which table to use when decoding a binary stream? The answer: it selects a table based on the first byte.

Take the most widely used Chinese character sets. GB2312 covers all simplified Chinese characters plus some others; GBK (K for "extension") adds traditional Chinese characters and other non-simplified characters on top of GB2312 (GB18030 is not a double-byte character set; we will come back to it when discussing Unicode). Both character sets represent each character with 1-2 bytes. Windows implements GBK encoding and decoding with code page 936. When parsing a byte stream, if a byte's high bit is 0, the first table of code page 936 is used, exactly as with a single-byte character set.

When the high bit is 1 — more precisely, when the first byte lies in the range 0x81-0xFE — that first byte selects the corresponding table in the code page. For example, when the first byte is 0x81, the following table in code page 936 applies:

(For the complete tables in code page 936, see MSDN: http://msdn.microsoft.com/en-us/library/cc194913%28v=MSDN.10%29.aspx.)

According to the code page 936 tables, when a program encounters the consecutive bytes 0x81 0x40, it decodes them as the character 丂.
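
We can check this with Python's built-in `gbk` codec, which implements the code page 936 tables described above:

```python
# First byte 0x81 has its high bit set, so it selects a double-byte
# table; 0x81 0x40 decodes to the character 丂.
data = b"\x81\x40"
print(data.decode("gbk"))        # 丂

# A byte with high bit 0 is decoded exactly like ASCII.
ascii_part = b"\x41"
print(ascii_part.decode("gbk"))  # A
```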

ANSI standards, national standards, ISO standards

The profusion of ASCII-derived character sets made document exchange very difficult, so various organizations stepped in to standardize. The American ANSI organization published ANSI standard character encodings (note: what we usually call "ANSI encoding" today actually means the platform's default encoding, e.g. Windows-1252 on English Windows systems and GBK on Chinese ones); the ISO organization published the various ISO standard character encodings; and individual countries defined national standard character sets, such as China's GBK, GB2312 and GB18030.

Operating systems ship with these standard character sets, plus some platform-specific ones, preinstalled. As long as your document uses a standard character set, it is reasonably portable: a document written with the GB2312 character set, for instance, displays correctly on any machine in mainland China. We can also read documents in several languages on one machine, provided the machine has the character sets those documents use installed.

Enter Unicode

Using different character sets lets us read documents in different languages on one machine, but one problem remains unsolved: displaying all characters in a single document. Solving it requires a huge character set that all of humanity agrees on, and that is the Unicode character set.

An overview of the Unicode character set

Unicode covers all characters in current human use and numbers each one uniformly, assigning it a unique code point. Unicode divides all characters, roughly by frequency of use, into 17 planes, each with 2^16 = 65536 code points.

Plane 0, the Basic Multilingual Plane (BMP), covers essentially every character in modern use; the other planes either hold ancient scripts or are reserved for expansion. The Unicode characters we use day to day almost all sit in the BMP. A large portion of the Unicode code space is still unassigned.

How the encoding model changed

Before Unicode, every character set was bound to a specific encoding scheme that tied characters directly to the final byte stream. ASCII, for example, mandated 7 bits to encode the ASCII character set; GB2312 and GBK mandated at most 2 bytes per character and fixed the byte order. Such systems could map characters straight to bytes on a storage device with a simple table lookup, i.e. through a code page. For example:

The drawback is that characters and byte streams are coupled too tightly, which limits the character set's ability to grow. If Martians ever settled on Earth, adding Martian script to such a character set would be difficult or even impossible, and would easily break the existing encoding rules.

Unicode was therefore designed to separate the character set from the character encoding scheme.

That is, although every character has a unique, fixed number in the Unicode character set (its code point, also called its Unicode code), the final byte stream is determined by the specific character encoding. Encoding the Unicode character 'A', for example, UTF-8 produces the byte stream 0x41, while UTF-16 (big-endian) produces 0x00 0x41.
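
The separation is easy to demonstrate: one code point, several byte streams. The character set assigns the number; the encoding decides the bytes:

```python
# 'A' is code point U+0041 in the Unicode character set; different
# encodings turn that one number into different byte streams.
ch = "A"
print(ch.encode("utf-8"))      # b'A'      -> 0x41
print(ch.encode("utf-16-be"))  # b'\x00A'  -> 0x00 0x41
```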

Common Unicode encodings

UCS-2/UTF-16

If we had to design an encoding scheme for the BMP characters of Unicode ourselves, how would we do it? Since the BMP holds 2^16 = 65536 code points, two bytes suffice to represent every one of them.

For example, the code point of '中' is 0x4E2D (01001110 00101101), so we can encode it as 01001110 00101101 (big-endian) or 00101101 01001110 (little-endian).
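
Python's `utf-16-be`/`utf-16-le` codecs show the two byte orders directly:

```python
# '中' is code point U+4E2D; byte order decides which byte is written first.
print("中".encode("utf-16-be").hex())  # 4e2d (big-endian)
print("中".encode("utf-16-le").hex())  # 2d4e (little-endian)
```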

UCS-2 and UTF-16 both use 2 bytes for BMP characters and produce identical results for them. The difference is that UCS-2 was originally designed with only the BMP in mind and is fixed at 2 bytes, so it cannot represent characters on the other Unicode planes, whereas UTF-16 removes that limitation and supports encoding and decoding of the full Unicode character set: it is a variable-length encoding using at least 2 bytes, with a 4-byte surrogate pair for characters outside the BMP. We won't go that far here; if you're interested, see Wikipedia: UTF-16/UCS-2.

Windows has used UTF-16 since the NT era, and many popular programming platforms — .Net, Java, Qt, and Cocoa on the Mac — use UTF-16 as their basic character encoding; a string in your code is stored in memory as a UTF-16-encoded byte stream.

UTF-8

UTF-8 is probably the most widely deployed Unicode encoding today. Because UCS-2/UTF-16 spends two bytes on each ASCII character, it is relatively inefficient to store and process; worse, the high byte of a UTF-16-encoded ASCII character is always 0x00, which many C functions treat as the end of the string, so text gets parsed incorrectly. As a result, Unicode initially met resistance in many Western countries, which greatly slowed its adoption, until clever people invented UTF-8 and solved the problem.

UTF-8 encodes each character in 1-4 bytes, and the scheme is actually quite simple.

(In the table above, x stands for the low 8 bits of the Unicode code and y for the high bits.)

ASCII characters are encoded as a single byte, exactly as in ASCII, so any document previously encoded and decoded as ASCII can be treated directly as UTF-8. Other characters take 2-4 bytes, where the number of leading 1 bits in the first byte tells you how many bytes are needed to decode the character, and the high 2 bits of every remaining byte are always 10. For example, a first byte of 1110yyyy has three leading 1s, meaning correct decoding needs three bytes in total: it must be combined with the following two bytes, each starting with 10, to yield the character.
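
We can verify the three-byte pattern with '中' (U+4E2D = 01001110 00101101), whose bits fill the template 1110xxxx 10xxxxxx 10xxxxxx:

```python
# 0100 | 111000 | 101101 slotted into the template gives
# 11100100 10111000 10101101 -> 0xE4 0xB8 0xAD.
encoded = "中".encode("utf-8")
print(encoded.hex())        # e4b8ad
print(f"{encoded[0]:08b}")  # 11100100: three leading 1s -> three bytes
```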

For more on UTF-8, see Wikipedia: UTF-8.

GB18030

Any encoding that can map Unicode characters to a byte stream counts as a Unicode encoding. China's GB18030 encoding covers all of Unicode, so it too qualifies. Unlike UTF-8 or UTF-16, though, it does not derive the byte stream from the character's Unicode number by a fixed rule; it can only encode by table lookup.

For more on GB18030, see: GB18030.

Common questions about Unicode

Is Unicode two bytes?

Unicode merely defines a huge, universal character set and assigns each character a unique number; what byte stream actually gets stored depends on the character encoding scheme. The recommended Unicode encodings are UTF-16 and UTF-8.

What does "UTF-8 with signature" mean?

"With signature" means the byte stream begins with a BOM marker. Much software tries to "intelligently" detect the character encoding of a byte stream; for efficiency, the detection usually inspects only the first several bytes to see whether they match the patterns of common encodings. Since UTF-8 and ASCII encode pure English text identically, the two cannot be told apart; adding a BOM at the front of the byte stream tells the software that a Unicode encoding is in use, making detection highly reliable. Be aware, though, that not all software or programs handle the BOM correctly: PHP, for instance, does not check for a BOM and simply parses it as ordinary bytes. So if your PHP files are saved as UTF-8 with a BOM, you may run into problems.
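
In Python the signature is just the three bytes EF BB BF, and the standard `utf-8-sig` codec adds and strips it:

```python
# 'UTF-8 with signature' = the byte stream starts with the BOM EF BB BF.
import codecs

with_bom = "hello".encode("utf-8-sig")
print(with_bom[:3] == codecs.BOM_UTF8)  # True: the EF BB BF signature
print(with_bom.decode("utf-8-sig"))     # hello (BOM stripped)
print(repr(with_bom.decode("utf-8")))   # '\ufeffhello' (plain UTF-8 keeps it)
```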

How does Unicode differ from earlier character set encodings?

In the early days, "character encoding", "character set" and "code page" all expressed the same idea: the GB2312 character set, GB2312 encoding and code page 936 are effectively one and the same thing. Unicode is different: the Unicode character set only defines the set of characters and their unique numbers, while "Unicode encoding" is just a collective term for concrete schemes such as UTF-8 and UCS-2/UTF-16, not a concrete scheme itself. So when you need to specify a character encoding, you may write gb2312, codepage936, utf-8 or utf-16, but please don't write "unicode" (I've seen charset=unicode in a web page's meta tag, hence this note).

Mojibake

Mojibake is character text displayed by a program that cannot be read in any language, usually littered with ? or �. It is a problem every computer user runs into sooner or later, and the cause is always the same: a byte stream decoded with the wrong character encoding. So whenever you think about anything related to displaying text, stay clear-headed about one question: what character encoding is in use right now? Only then can you analyze and fix mojibake correctly.

Take the most common case, garbled web pages. If you are the site's engineer, check the following:

  • Does the server's Content-Type response header specify a character encoding?
  • Does the page specify a character encoding with a META HTTP-EQUIV tag?
  • Does the encoding the page file was saved in match the encoding the page declares?

Note that parsing a page with the wrong character encoding may also break its scripts or stylesheets. For details, see my earlier articles on script errors caused by document character sets and on encoding problems in Asp.Net pages.

A while ago someone on a technical forum reported that a WinForm program got mojibake when reading HTML from the clipboard via the Clipboard class's GetData method; I suspect WinForm was likewise not using the right character encoding when retrieving the HTML text. The Windows clipboard's HTML format uses UTF-8, i.e. text placed there is encoded and decoded as UTF-8, so as long as both programs go through the Windows clipboard API, copy and paste should not garble the text; mojibake appears only if one side decodes the clipboard data with the wrong character encoding. (In a quick WinForm clipboard experiment I found that GetData used the system default encoding, not UTF-8.)

One more note on the ? and � that appear in mojibake: when a program decodes a byte stream with a particular character encoding and hits bytes it cannot decode, it substitutes ? or � for them. So once the text you end up with contains these characters and you can no longer obtain the original byte stream, the correct information is irretrievably lost: no character encoding can recover the original text from it.
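
Python makes the failure mode easy to reproduce: the same two bytes, decoded two ways. GBK recovers '中', while UTF-8 cannot parse them and substitutes U+FFFD (�):

```python
# Mojibake in one line: decode a GBK byte stream as UTF-8.
data = "中".encode("gbk")                      # b'\xd6\xd0'
print(data.decode("gbk"))                      # 中
print(data.decode("utf-8", errors="replace"))  # �� (information lost)
```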

A few necessary terms

A character set (Character Set) is literally a set of characters: the ASCII character set defines 128 characters, GB2312 defines 7445. Strictly speaking, a character set in computing is an ordered set of numbered characters (not necessarily contiguous).

A code point (Code Point) is the numeric identifier of each character within a character set. ASCII numbers its 128 characters with the 128 consecutive values 0-127. GBK numbers its characters with region-position (区位) codes: define a 94×94 matrix whose rows are "regions" and columns "positions", place all the national-standard Chinese characters into the matrix, and every character gets a unique region-position code. '中', for instance, sits at region 54, position 48, so its code is 5448. Unicode divides its characters by category into the 17 planes numbered 0-16, each holding 2^16 = 65536 code points, so Unicode's total code space comprises 17 × 65536 = 1114112 code points.

Encoding is the process of converting characters into a byte stream.

Decoding is the process of parsing a byte stream back into characters.

A character encoding (Character Encoding) is a concrete scheme for mapping the code points of a character set to byte streams. The ASCII character encoding uses the low 7 bits of a single byte to encode every character: 'A' is number 65, or 0x41 as a single byte, so b'01000001' is what gets written to storage. GBK encoding adds an offset of 0xA0 (160) to both the region code and the position code of the region-position code (GBK's code point) — the offset exists mainly for compatibility with ASCII. For the '中' mentioned above, the region-position code 5448 is 0x36 0x30 in hex, and adding the 0xA0 offset to each gives 0xD6D0, the GBK encoding of '中'.
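The offset arithmetic for '中' can be checked directly:

```python
# Derive the GBK bytes of '中' from its region-position code 5448:
# add the 0xA0 offset to the region (54) and the position (48).
region, position = 54, 48
encoded = bytes([region + 0xA0, position + 0xA0])
print(encoded.hex())          # d6d0
print(encoded.decode("gbk"))  # 中
```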

A code page (Code Page) is a concrete form of a character encoding. Early character repertoires were small, so a table mapping characters directly to bytes, consulted at encode/decode time, was the norm, and modern operating systems have kept the approach. Windows uses code page 936 and Mac systems use the EUC-CN code page to implement the GBK character set's encoding; the names differ, but the encoding of any given Chinese character is guaranteed to be the same.

The terms big-endian and little-endian come from Gulliver's Travels. An egg, as we know, has a big end and a small end, and the Lilliputians disagreed about which end to crack first. Likewise, when transmitting a multi-byte word (a data type represented jointly by several bytes), the computing world disagrees about whether to send the high-order byte first (big-endian) or the low-order byte first (little-endian); that is where the two modes come from. Writing a file and transmitting over a network are both writes to a stream device, performed from the stream's low address toward its high address (which matches human habit). For a multi-byte word, writing the high-order byte first is called big-endian mode, and the reverse little-endian. In other words, in big-endian mode the byte order runs opposite to the stream device's address order, while in little-endian mode the two coincide. Network protocols generally transmit in big-endian mode.

——Kevin Yang

Reference link:

from:http://www.techug.com/character-set

Mary Meeker's 2014 Internet Trends Report [annotated edition] (updated with the full presentation video)

The 2014 Internet Trends report published by KPCB's "Queen of the Internet" Mary Meeker runs to 164 pages, packed throughout with well-sourced data and Meeker's own outlook. This article is compiled entirely from Meeker's 2014 report. For readability, the order has been adjusted slightly (for example, the Immigration Update section on US immigration data is not excerpted) and the content selected (with a very small amount of information from other sources); if you want the unprocessed original, you can download the PDF here or view it on the official site.

(Update, 2014-05-31 08:20) Added the full video of Mary Meeker walking through the report at the Code Conference — strongly recommended!

The video is here.

Key Internet trends

The huge momentum of mobile will power an Internet whose growth is gradually slowing, and online video will be the engine of mobile data growth. Global Internet users grew 9% in 2013, versus 11% in 2012, and growth will keep decelerating.

Mobile's share of global Internet traffic is growing 1.5x per year, with no sign of slowing.

1) The main trends in data consumption and user growth:

  • Overall, Internet user growth is below 10% year over year. The developing markets where monetization is hardest have the highest Internet growth rates.
  • Smartphone users are growing at 20% (though the rate is slowing), driven mainly by emerging regions such as China, India and Brazil.
  • Tablets are still in their early stage, growing at a rapid 52%.
  • Mobile data traffic is surging at a remarkable 81%, with video the source of that strength.

2) Smartphones

Starting around Q2 2010, smartphones' share of total mobile phone users worldwide began climbing steadily, reaching 30% by Q4 2013, out of 5.2 billion mobile phone users globally. The headroom for smartphone growth remains considerable, and the industry consensus is that low- and mid-range devices in emerging markets are the main growth driver.

Global smartphone shipments in Q4 2013 were 315 million, versus 54 million in Q4 2009, an increase of roughly 480%.

3) Tablets

Tablet shipments grew 52% in 2013; set against the growth curves of PCs (desktops and laptops included), tablet growth is unprecedented. Even so, the momentum will continue: 2013 tablet users (439 million) were only 56% of laptop users (789 million), 28% of smartphone users (1.6 billion) and 8% of TV viewers (5.5 billion). This means the tablet market still holds enormous room for growth.

4) The ever-growing mobile Internet

Growth in devices naturally drives mobile network traffic upward: in May 2014, mobile accounted for 25% of all Internet traffic, versus 14% a year earlier. Africa has the highest mobile share of Internet traffic at 38%, while South America is the market where the figure is growing fastest.

5) Mobile OS, made in the USA

Across the global mobile OS market, the American-made iOS, Windows Phone and Android have all but monopolized the field. Over the eight years from 2005 to 2013, these three systems' combined share grew from 5% to 97%, with Android's growth especially striking.

6) The potential of mobile platforms

According to a Morgan Stanley report, each computing platform has been 10x the size of its predecessor. New device categories — smartphones, tablets, home entertainment systems and more — will reach an astonishing 10 billion networked units in the mobile Internet era. Credit goes not only to whole new classes of devices but also to deep, multi-layered integration of hardware and software.

* *

Irresistible advertising

Online advertising remains strong, with clear year-over-year growth since 2009 — 16% last year. Mobile advertising even more so: mobile's share of total advertising grew by nearly half over the past year, reaching 11%.

Comparing the mobile advertising monetization of Google, Facebook and Twitter by average revenue per user (ARPU) for this year's first quarter: Google leads by a wide margin at $45, Facebook is at $7.24 and Twitter at $3.55, with the latter two growing at remarkable speed.

Users' deepening reliance on these platforms is another reason for mobile's continued explosion. In 2013, time spent by American users watching media online grew 25% and time spent with ads grew 22%, while time watching media on mobile grew 20% against mobile advertising's 4% — suggesting the untapped value of mobile advertising in the US market alone exceeds $30 billion.

Mobile advertising is red hot and the big companies have all launched or revamped ad networks, yet the data show that apps still take a far larger share of mobile monetization than advertising: together the two were worth about $38 billion in 2013, with apps accounting for 68%.

* *

Network security deserves attention

The number of active attack groups keeps rising, 95% of networked systems have vulnerabilities of some kind, and the rise of mobile platforms will invite more direct attacks.

News of customer data stolen from Internet companies and even traditional retail chains kept surfacing over the past year: in China, the Ctrip credit-card leak and the Xiaomi database dump; in the US, the Target credit-card breach and the attack on eBay. Even the security of the Internet of Things has begun to enter public view.

* *

Tech stocks, education and healthcare

1) Tech companies

Facebook's $19 billion acquisition of WhatsApp, Snapchat's rejection of sky-high offers from the giants, and the wave of Chinese tech companies listing in the US have many people asking: surely these companies are overvalued?

On paper, perhaps — but consider the historical record:

Global tech IPO proceeds in 2013 ($8 billion) were only 17% of the 1999 peak ($31 billion), and the number of IPOs (41) only 13% of that year's peak (310).

Venture funding for US tech companies in 2013, in both deal count and dollars, likewise came nowhere near the 2000 high.

Tech's share of S&P 500 market value has fallen from 35% in 2000 to 19% today.

2) The education market (US) may also be about to take off, for these reasons:

  • Most people agree education matters: eight in ten say receiving an education is of extreme importance to them.
  • The Internet makes more effective, cheaper, personalized teaching possible.
  • The cost of starting an education venture is falling while distribution channels are expanding.

More education-related data: 25 million people learn a new language with Duolingo (learning gets more fun); 12 million teachers, students and parents have sent over 500 million messages through Remind101 (communication gets easier); 35 million teachers, students and parents use ClassDojo to help students build good behavior (feedback gets timely); plus Coursera's 7 million online students, 65 million course downloads from iTunes U's Open University, and more.

3) Rising health problems caused by wasted resources, growing employer burdens and unhealthy lifestyles, combined with the enabling conditions of technological progress, mean the US healthcare market is on the verge of major change.

For tech companies this is good news: Internet, wearable and quantified-self companies, hardware and software alike, stand to benefit from the change, and help from government will add further momentum.

* *

Re-imagining continues…

1) Messaging apps

Global OTT messaging services gained over 1 billion users within 5 years.

If you picture each user as a node and each relationship between two users as an edge, then traditional social networks like Facebook form a broader but more fragile web, while apps like WeChat and WhatsApp weave smaller, sturdier nets: their users communicate more frequently and more privately, so the links between nodes are stronger.

A picture is worth a thousand words. Photo and video sharing services are rapidly becoming the new form of exchange that displaces plain text messages; combined with the spread of mobile devices, the growth of services like Pinterest, Instagram and Snapchat has greatly boosted network traffic.

2) Apps

Led by Facebook, the "one almighty app" strategy is giving way to "many apps, each with its own job". Facebook now owns at least six mobile apps with distinct roles: photo sharing (Instagram), messaging (Facebook Messenger and WhatsApp — Zuckerberg considers their use cases different), news reading (Paper), and more.

This "unbundling" continues; the latest example is check-in pioneer Foursquare spinning its signature check-in feature out into Swarm, an app dedicated to check-ins and socializing. TechCrunch's Matthew Panzarino argues that future apps will be built strictly around a function: users will know exactly what they need the moment they launch one. These focused apps step forward when they're useful and otherwise stay quietly in reserve. Moreover, with ever more sensors on board, such apps can sense different contexts in real time in the background, anticipating user behavior and responding in a more human way.

3) Content and distribution channels

Social networks have become highly efficient channels for spreading content: Shareaholic's latest data show Facebook, Pinterest and Twitter driving 21%, 7% and 1% of referral traffic respectively. On Twitter, a piece of content needs on average only 6.5 hours to accumulate half its referral traffic; on Facebook, 9 hours.

Publishing news through social networks has become the first choice of many traditional and new content publishers, with BuzzFeed leading on Facebook and the BBC on Twitter.

4) Daily life

Food, clothing, housing, transport, entertainment: the Internet's influence on our daily lives grows ever more visible.

In the US, over 47% of online transactions now ship free. Local same-day delivery will be the next breakout area; giants like Amazon and Google have already begun trials in some cities.

Streaming music is growing fast, up 32% in the US market in 2013, as online music services led by Spotify expand. Meanwhile digital music sales saw their first year-over-year decline, at -6%, with US sales of 1.3 billion tracks for the year.

5) Digital currency

5 million Bitcoin wallets demonstrate that digital currency still has a sizable user base.

6) Vertical industries

Content + community + commerce = the Internet's iron triangle.

Houzz, which connects homeowners, designers and contractors, builds its triangle from photos, a community of professionals and consumers, and products such as turnkey solutions.

* *

At the heart of all this re-imagining: the data explosion from mobile devices and ubiquitous sensors

Advances in the number and sophistication of mobile devices, together with fast-developing mobile networks, let us collect user data in unprecedented quantity. Instant sharing and communication make exploring patterns of user behavior possible, but the privacy questions involved are sharpening too. Likewise, analyzing massive datasets lets us solve problems that could never be solved before, while how individual interests are protected is a question that equally deserves attention.

1) Big-data trends:

Uploadable/shareable/traceable real-time data is growing fast: on a handful of leading platforms, people upload and share more than 1.8 billion photos a day, a number that keeps climbing as more new services rise.

Uploadable/shareable/non-traceable data:

Total information is "exploding", projected to reach 13 ZB by 2016.

Falling costs have enabled the wide adoption of all kinds of sensors in mobile devices; consumers who were wowed a few years ago by the iPhone's built-in compass are now trying to measure their heart rate with their phones.

Last year the number of MEMS sensors used in digital devices worldwide reached 8 billion, up 32%, with phones accounting for the great majority, followed by tablets.

2) Falling computing costs and the rise of the "cloud"

Technological progress has driven down the costs of processing, storage, bandwidth and manufacturing devices such as mobile phones at great speed. Maturing infrastructure has also put cloud computing within easy reach.

3) New interfaces make data easier to mine

Mary Meeker believes that new devices plus new forms of interaction will turn the things already intimately bound to our lives into commercial opportunities of enormous potential, fundamentally because mining and using data becomes simple and direct.

The ways people eat, dress, live and travel will inevitably be reshaped.

4) The importance of data mining and analysis

IDC estimates that about 34% of general data holds research value, coming from social media, images and audio, data processing and other sources. Only 7% of data has been tagged — a large share of it from today's badly fragmented Internet of Things, where the analytical value is extremely high — and of all general data, only 1% has ever been analyzed.

Acquiring data certainly matters, but without analytical tools data is meaningless. New services that deliver solutions by interpreting data are starting to appear; a trend of big data solving big problems is emerging.

Screen proliferation and online video, waiting in the wings

As Netflix founder and CEO Reed Hastings puts it, the future of television has four trends.

1) Screen proliferation

Shipments of mobile devices (smartphones plus tablets) in 2013 were already 4-5 times those of TVs and PCs combined, a shift that took only a decade. As mobile penetration in emerging markets keeps rising, the trend will continue.

The smartphone is the most frequently used electronic medium for consumers in most regions, followed by the TV or the PC. More and more users use more than one device at the same time — for example, using a smartphone or tablet while watching TV to look up information about the program or interact with friends. That group's share has tripled over the past two years, reaching 84%.

This means not just more screens but engagement raised to a whole new level: statistics show that during the 2012 Olympics, the time users spent across the four screens of TV, PC, phone and tablet was double the time spent on TV alone.

Users can now get more content in less time.

2) The disappearance of the traditional remote

Personalized search engines will replace the traditional TV remote, and smart TVs or smart-TV solutions will displace traditional TV services the way smartphones disrupted feature phones.

TV boxes, one smart-TV solution, have already won tens of millions of users, and newcomers like Chromecast (multi-screen casting) and Fire TV (voice search) are setting new industry standards.

Smart TVs' share of total TV shipments keeps climbing — 39% in 2013 — yet their usage rate remains below 10%.

3) Channels replaced by apps

TV channels are turning into on-demand apps one by one, and the traditional TV guide is evolving into in-app search. Even so, most TV on-demand apps still require users to first be paying subscribers of a traditional TV network; in other words, on-demand apps are merely a convenient tool for watching TV channels, not yet a replacement.

But new content distribution channels led by YouTube are also growing fast, and compared with traditional channels these networks can carry content to far wider audiences.

New media create new classes of content producers: they can be big studios, or ordinary individuals like you and me. Consumers keep their enthusiasm for long-form video while increasingly welcoming high-quality short videos; YouTube's top ten videos average 7 minutes in length.

Content producers are not the only beneficiaries: the precision and effectiveness of online advertising have also improved dramatically.

Consumers enjoy well-produced short-form ads.

As a distribution channel, YouTube can in turn use analysis of user behavior data to give users a better experience and give producers better solutions.

Traditional TV has viewers; in the world of Internet TV, viewers become fans. Fans are more engaged and more loyal, and their behavior patterns are easier to predict: when a show ends, viewers leave, while fans comment, favorite and share.

Shared traits among audiences spawn new media content, such as live game streaming.

Social features bring a clear lift in advertising effectiveness: with Twitter participating, TV ads improve both brand favorability and consumer purchase intent.

Consumers are starting to choose content by personal taste, and services led by Netflix set up separate profiles for different users. Young users love watching video online; millennials spend 34% of their viewing time on online video.

* *

The rise of China

China has 500 million mobile Internet users, 80% of all its Internet users — the highest proportion in the world.

By monthly unique visitors, in January 2013 nine of the world's top ten Internet services were American, with Tencent the only non-US company. Data for March 2014 show that Chinese companies now hold four of the global top ten — in order, Alibaba, Baidu, Tencent and Sohu — with the remaining six from the US.

Chinese companies are leading innovation in mobile commerce; WeChat's innovations in service integration and in fusing personal messaging, customer relationship management and transaction payments are cited as the example.

The four tech companies Google, Facebook, Tencent and Alibaba together poured $50 billion into acquisitions and investments during 2012-2014; nothing rivals Facebook's $19 billion check for WhatsApp.

from:http://www.36kr.com/p/212449.html

Best Practices for Speeding Up Your Web Site

The Exceptional Performance team has identified a number of best practices for making web pages fast. The list includes 35 best practices divided into 7 categories.

Minimize HTTP Requests

tag: content

80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.

One way to reduce the number of components in the page is to simplify the page’s design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.

Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone. Using image maps for navigation is not accessible either, so it’s not recommended.

Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.
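
Building a data: URL is straightforward: base64-encode the image bytes and prepend the scheme and MIME type. This sketch uses a placeholder payload standing in for a real image file's bytes:

```python
# Construct a data: URL for an inline image. The payload here is a
# stand-in; in practice you would read the bytes of an actual file.
import base64

gif_bytes = b"GIF89a..."  # placeholder for a real image file's contents
data_url = "data:image/gif;base64," + base64.b64encode(gif_bytes).decode("ascii")
print(data_url)
```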

Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer’s blog post Browser Cache Usage – Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.


Use a Content Delivery Network

tag: server

The user’s proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user’s perspective. But where should you start?

As a first step to implementing geographically dispersed content, don’t attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it’s better to first disperse your static content. This not only achieves a bigger reduction in response times, but it’s easier thanks to content delivery networks.

A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.

Some large Internet companies own their own CDN, but it’s cost-effective to use a CDN service provider, such as Akamai Technologies, EdgeCast, or Level 3. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN (both 3rd party as mentioned above as well as Yahoo’s own CDN) improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.


Add an Expires or a Cache-Control Header

tag: server

There are two aspects to this rule:

  • For static components: implement “Never expire” policy by setting far future Expires header
  • For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests


Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won’t be stale until April 15, 2010.

      Expires: Thu, 15 Apr 2010 20:00:00 GMT


If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.

      ExpiresDefault "access plus 10 years"


Keep in mind, if you use a far future Expires header you have to change the component’s filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component’s filename, for example, yahoo_2.0.6.js.

Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser’s cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A “primed cache” already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user’s Internet connection.


Gzip Components

tag: server

The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It’s true that the end-user’s bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.

Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

      Accept-Encoding: gzip, deflate


If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

      Content-Encoding: gzip


Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you’re likely to see is deflate, but it’s less effective and less popular.

Gzipping generally reduces the response size by about 70%. Approximately 90% of today’s Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.
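
The savings are easy to see with Python's standard `gzip` module on a synthetic, repetitive HTML-like payload (real pages compress less uniformly, but markup is typically highly redundant):

```python
# Demonstrate gzip's effect on repetitive text such as HTML markup.
import gzip

body = ("<div class='item'>hello world</div>\n" * 200).encode("utf-8")
compressed = gzip.compress(body)
ratio = 1 - len(compressed) / len(body)
print(f"{len(body)} -> {len(compressed)} bytes ({ratio:.0%} smaller)")
```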

There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It’s also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it’s worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.

Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.


Put Stylesheets at the Top

tag: css

While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: “Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times.” Neither of the alternatives, the blank white screen or flash of unstyled content, is worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.


Put Scripts at the Bottom

tag: javascript

The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won’t start any other downloads, even on different hostnames.

In some situations it’s not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page’s content, it can’t be moved lower in the page. There might also be scoping issues. In many cases, there are ways to workaround these situations.

An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn’t support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.


Avoid CSS Expressions

tag: css

CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported in Internet Explorer starting with version 5, but were deprecated starting with IE8. As an example, the background color could be set to alternate every hour using CSS expressions:

      background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );


As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.

The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.

One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.


Make JavaScript and CSS External

tag: javascript, css

Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?

Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. This reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.

Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!’s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser’s cache.


Reduce DNS Lookups

tag: content

The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people’s names to their phone numbers. When you type www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server’s IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to look up the IP address for a given hostname. The browser can’t download anything from this hostname until the DNS lookup is completed.

DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user’s ISP or local area network, but there is also caching that occurs on the individual user’s computer. The DNS information remains in the operating system’s DNS cache (the “DNS Client service” on Microsoft Windows). Most browsers have their own caches, separate from the operating system’s cache. As long as the browser keeps a DNS record in its own cache, it doesn’t bother the operating system with a request for the record.

Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)

When the client’s DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page’s URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.

Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.
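
To make the counting rule concrete, here is a small sketch using the standard WHATWG URL parser: given the URLs of a page's components, it returns the number of unique hostnames, which is the number of DNS lookups a completely cold cache would incur.

```javascript
// Count the unique hostnames referenced by a page's component URLs.
// With empty browser and OS DNS caches, this equals the number of
// DNS lookups the page will trigger.
function countUniqueHostnames(urls) {
  var seen = {};
  urls.forEach(function (u) {
    seen[new URL(u).hostname] = true;
  });
  return Object.keys(seen).length;
}
```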


Minify JavaScript and CSS

tag: javascript, css

Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified, all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and the YUI Compressor. The YUI Compressor can also minify CSS.

Obfuscation is an alternative optimization that can be applied to source code. It’s more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.

In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.
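
By way of illustration only (real minifiers such as JSMin and the YUI Compressor parse the source properly; this regex sketch would mangle strings that contain comment markers), minification boils down to dropping comments and collapsing whitespace:

```javascript
// Deliberately naive minifier: strip comments, collapse whitespace runs.
// Do not use on real code -- it will break string literals containing
// "//" or "/*". It only illustrates where the byte savings come from.
function naiveMinify(source) {
  return source
    .replace(/\/\*[\s\S]*?\*\//g, '') // remove block comments
    .replace(/\/\/[^\n]*/g, '')       // remove line comments
    .replace(/\s+/g, ' ')             // collapse whitespace runs
    .trim();
}
```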


Avoid Redirects

tag: content

Redirects are accomplished using the 301 and 302 status codes. Here’s an example of the HTTP headers in a 301 response:

      HTTP/1.1 301 Moved Permanently
      Location: http://example.com/newuri
      Content-Type: text/html

The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.

The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.

One of the most wasteful redirects happens frequently and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you’re using Apache handlers.

Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.


Remove Duplicate Scripts

tag: javascript

It hurts performance to include the same JavaScript file twice in one page. This isn’t as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.

Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.

In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.

One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.

      <script type="text/javascript" src="menu_1.0.17.js"></script>

An alternative in PHP would be to create a function called insertScript.

      <?php insertScript("menu.js") ?>

In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far future Expires headers.
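
The same idea can be sketched in JavaScript (names are illustrative): the manager remembers which scripts it has already emitted and silently drops duplicates.

```javascript
// Sketch of a script-management module: emit each script tag at most
// once per page, no matter how many templates ask for it.
var insertedScripts = {};

function insertScriptOnce(src) {
  if (insertedScripts[src]) {
    return ''; // already on the page: emit nothing
  }
  insertedScripts[src] = true;
  return '<script type="text/javascript" src="' + src + '"></script>';
}
```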


Configure ETags

tag: server

Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string must be quoted. The origin server specifies the component's ETag using the ETag response header.

      HTTP/1.1 200 OK
      Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
      ETag: "10c24bc-4ab-457e1c1f"
      Content-Length: 12195

Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned, reducing the response by 12195 bytes in this example.

      GET /i/yahoo.gif HTTP/1.1
      Host: us.yimg.com
      If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
      If-None-Match: "10c24bc-4ab-457e1c1f"
      HTTP/1.1 304 Not Modified

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won’t match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.

The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It's unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

The end result is that ETags generated by Apache and IIS for the exact same component won't match from one server to another. If the ETags don't match, the user doesn't receive the small, fast 304 response that ETags were designed for; instead, they'll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether. The Last-Modified header validates based on the component's timestamp, and removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

      FileETag none


Make Ajax Cacheable

tag: content

One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won’t be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It’s important to remember that “asynchronous” does not imply “instantaneous”.

To improve performance, it's important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Several of the other rules, such as gzipping and minifying the response, also apply to Ajax.

Let’s look at an example. A Web 2.0 email client might use Ajax to download the user’s address book for autocompletion. If the user hasn’t modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn’t been modified since the last download, the timestamp will be the same and the address book will be read from the browser’s cache eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn’t match the cached response, and the browser will request the updated address book entries.
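
The timestamp trick can be sketched as follows (the endpoint path and parameter name are hypothetical): the same last-modified time always yields the same URL, so an unchanged address book is served from cache, while a newer timestamp produces a URL the cache has never seen.

```javascript
// Build the address book Ajax URL with the last-modified time baked in.
// Same timestamp -> same URL -> cache hit; new timestamp -> cache miss.
function addressBookUrl(lastModified) {
  return '/ajax/addressbook?t=' + lastModified;
}
```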

Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.


Flush the Buffer Early

tag: server

When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page. During this time, the browser is idle as it waits for the data to arrive. In PHP you have the function flush(). It allows you to send your partially ready HTML response to the browser so that the browser can start fetching components while your backend is busy with the rest of the HTML page. The benefit is mainly seen on busy backends or light frontends.

A good place to consider flushing is right after the HEAD because the HTML for the head is usually easier to produce and it allows you to include any CSS and JavaScript files for the browser to start fetching in parallel while the backend is still processing.

Example:

      ... <!-- css, js -->
    </head>
    <?php flush(); ?>
    <body>
      ... <!-- content -->

Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.


Use GET for AJAX Requests

tag: server

The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending data. So it’s best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.

An interesting side effect is that a POST without actually posting any data behaves like GET. Based on the HTTP specs, GET is meant for retrieving information, so it makes sense (semantically) to use GET when you're only requesting data, as opposed to sending data to be stored server-side.
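
A hedged sketch of how a request helper might apply this rule (names are illustrative; the 2K limit is IE's, as noted above): serialize the parameters, use GET when the URL fits, and fall back to POST only when it does not.

```javascript
// Prefer GET for Ajax; fall back to POST only when the URL would
// exceed IE's limit. The 2048-character figure is from the text.
var MAX_IE_URL_LENGTH = 2048;

function chooseMethod(baseUrl, params) {
  var query = Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
  }).join('&');
  var url = baseUrl + '?' + query;
  return url.length <= MAX_IE_URL_LENGTH
    ? { method: 'GET', url: url }
    : { method: 'POST', url: baseUrl, body: query };
}
```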


Post-load Components

tag: content

You can take a closer look at your page and ask yourself: “What’s absolutely required in order to render the page initially?”. The rest of the content and components can wait.

JavaScript is an ideal candidate for splitting before and after the onload event. For example if you have JavaScript code and libraries that do drag and drop and animations, those can wait, because dragging elements on the page comes after the initial rendering. Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.

Tools to help you out in your effort: YUI Image Loader allows you to delay images below the fold and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example in the wild take a look at Yahoo! Home Page with Firebug’s Net Panel turned on.

It's good when the performance goals are in line with other web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience, but you have to make sure the page works even without JavaScript. So after you've made sure the page works fine, you can enhance it with some post-loaded scripts that give you more bells and whistles such as drag and drop and animations.


Preload Components

tag: content

Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you’ll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.

There are actually several types of preloading:

  • Unconditional preload – as soon as onload fires, you go ahead and fetch some extra components. Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the subsequent search result page.
  • Conditional preload – based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
  • Anticipated preload – preload in advance before launching a redesign. It often happens after a redesign that you hear: “The new site is cool, but it’s slower than before”. Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle to request images and scripts that will be used by the new site.


Reduce the Number of DOM Elements

tag: content

A complex page means more bytes to download, and it also means slower DOM access in JavaScript. It makes a difference whether you loop through 500 or 5000 DOM elements on the page when, for example, you want to add an event handler.

A high number of DOM elements can be a symptom that there’s something that should be improved with the markup of the page without necessarily removing content. Are you using nested tables for layout purposes? Are you throwing in more <div>s only to fix layout issues? Maybe there’s a better and more semantically correct way to do your markup.

The YUI CSS utilities can be a great help with layouts: grids.css can help you with the overall layout, while fonts.css and reset.css can help you strip away the browser's default formatting. This is a chance to start fresh and think about your markup, for example to use <div>s only when it makes sense semantically, and not because they render a new line.

The number of DOM elements is easy to test, just type in Firebug’s console:
document.getElementsByTagName('*').length

And how many DOM elements are too many? Check other similar pages that have good markup. For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).


Split Components Across Domains

tag: content

Splitting components allows you to maximize parallel downloads. Make sure you're using not more than two to four domains because of the DNS lookup penalty. For example, you can host your HTML and dynamic content on www.example.org and split static components between static1.example.org and static2.example.org.

For more information check “Maximizing Parallel Downloads in the Carpool Lane” by Tenni Theurer and Patty Chi.


Minimize the Number of iframes

tag: content

Iframes allow an HTML document to be inserted in the parent document. It’s important to understand how iframes work so they can be used effectively.

<iframe> pros:

  • Helps with slow third-party content like badges and ads
  • Security sandbox
  • Download scripts in parallel

<iframe> cons:

  • Costly even if blank
  • Blocks page onload
  • Non-semantic


No 404s

tag: content

HTTP requests are expensive so making an HTTP request and getting a useless response (i.e. 404 Not Found) is totally unnecessary and will slow down the user experience without any benefit.

Some sites have helpful 404s ("Did you mean X?"), which is great for the user experience but also wastes server resources (database access, etc.). Particularly bad is when the link to an external JavaScript file is wrong and the result is a 404. First, this download will block parallel downloads. Next, the browser may try to parse the 404 response body as if it were JavaScript code, trying to find something usable in it.


Reduce Cookie Size

tag: cookie

HTTP cookies are used for a variety of reasons such as authentication and personalization. Information about cookies is exchanged in the HTTP headers between web servers and browsers. It’s important to keep the size of cookies as low as possible to minimize the impact on the user’s response time.

For more information check "When the Cookie Crumbles" by Tenni Theurer and Patty Chi. The take-home points of this research:

  • Eliminate unnecessary cookies
  • Keep cookie sizes as low as possible to minimize the impact on the user response time
  • Be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected
  • Set an Expires date appropriately. An earlier Expires date or none removes the cookie sooner, improving the user response time


Use Cookie-free Domains for Components

tag: cookie

When the browser makes a request for a static image and sends cookies together with the request, the server doesn’t have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.

If your domain is www.example.org, you can host your static components on static.example.org. However, if you’ve already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com and so on.

Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it’s best to use the www subdomain and write the cookies to that subdomain.


Minimize DOM Access

tag: javascript

Accessing DOM elements with JavaScript is slow so in order to have a more responsive page, you should:

  • Cache references to accessed elements
  • Update nodes “offline” and then add them to the tree
  • Avoid fixing layout with JavaScript

For more information check the YUI theatre’s “High Performance Ajax Applications” by Julien Lecomte.


Develop Smart Event Handlers

tag: javascript

Sometimes pages feel less responsive because of too many event handlers attached to different elements of the DOM tree which are then executed too often. That’s why using event delegation is a good approach. If you have 10 buttons inside a div, attach only one event handler to the div wrapper, instead of one handler for each button. Events bubble up so you’ll be able to catch the event and figure out which button it originated from.
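
A minimal sketch of event delegation (function and id names are illustrative): the wrapper's single handler walks up from the event target until it finds a button it knows about.

```javascript
// One delegated handler for the whole wrapper: walk up from the event
// target to find which registered button the event bubbled from.
function delegate(handlerByButtonId) {
  return function (event) {
    var target = event.target;
    while (target && !handlerByButtonId[target.id]) {
      target = target.parentNode; // climb toward the wrapper
    }
    if (target) {
      handlerByButtonId[target.id](event);
    }
  };
}

// Usage in a page would be a single listener on the wrapper:
// wrapper.onclick = delegate({ save: onSave, cancel: onCancel });
```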

You also don’t need to wait for the onload event in order to start doing something with the DOM tree. Often all you need is the element you want to access to be available in the tree. You don’t have to wait for all images to be downloaded. DOMContentLoaded is the event you might consider using instead of onload, but until it’s available in all browsers, you can use the YUI Event utility, which has an onAvailable method.

For more information check the YUI theatre’s “High Performance Ajax Applications” by Julien Lecomte.


Choose <link> over @import

tag: css

One of the previous best practices states that CSS should be at the top in order to allow for progressive rendering.

In IE @import behaves the same as using <link> at the bottom of the page, so it’s best not to use it.


Avoid Filters

tag: css

The IE-proprietary AlphaImageLoader filter aims to fix a problem with semi-transparent true color PNGs in IE versions < 7. The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded. It also increases memory consumption and is applied per element, not per image, so the problem is multiplied.

The best approach is to avoid AlphaImageLoader completely and use gracefully degrading PNG8 instead, which works fine in IE. If you absolutely need AlphaImageLoader, use the underscore hack (_filter) so as not to penalize your IE7+ users.


Optimize Images

tag: images

After a designer is done with creating the images for your web page, there are still some things you can try before you FTP those images to your web server.

  • You can check the GIFs and see if they are using a palette size corresponding to the number of colors in the image. Using ImageMagick, it's easy to check:
    identify -verbose image.gif
    When you see an image using four colors and 256 color "slots" in the palette, there is room for improvement.
  • Try converting GIFs to PNGs and see if there is a saving. More often than not, there is. Developers often hesitate to use PNGs due to the limited support in browsers, but this is now a thing of the past. The only real problem is alpha-transparency in true color PNGs, but then again, GIFs are not true color and don’t support variable transparency either. So anything a GIF can do, a palette PNG (PNG8) can do too (except for animations). This simple imagemagick command results in totally safe-to-use PNGs:
    convert image.gif image.png
    “All we are saying is: Give PiNG a Chance!”
  • Run pngcrush (or any other PNG optimizer tool) on all your PNGs. Example:
    pngcrush image.png -rem alla -reduce -brute result.png
  • Run jpegtran on all your JPEGs. This tool does lossless JPEG operations such as rotation and can also be used to optimize and remove comments and other useless information (such as EXIF information) from your images.
    jpegtran -copy none -optimize -perfect src.jpg dest.jpg


Optimize CSS Sprites

tag: images

  • Arranging the images in the sprite horizontally as opposed to vertically usually results in a smaller file size.
  • Combining similar colors in a sprite helps you keep the color count low, ideally under 256 colors so to fit in a PNG8.
  • “Be mobile-friendly” and don’t leave big gaps between the images in a sprite. This doesn’t affect the file size as much, but it requires less memory for the user agent to decompress the image into a pixel map: a 100×100 image is 10 thousand pixels, whereas a 1000×1000 image is 1 million pixels.


Don’t Scale Images in HTML

tag: images

Don’t use a bigger image than you need just because you can set the width and height in HTML. If you need
<img width="100" height="100" src="mycat.jpg" alt="My Cat" />
then your image (mycat.jpg) should be 100x100px rather than a scaled down 500x500px image.


Make favicon.ico Small and Cacheable

tag: images

The favicon.ico is an image that lives in the root of your server. It's a necessary evil because even if you don't care about it, the browser will still request it, so it's better not to respond with a 404 Not Found. Also, since it's on the same server, cookies are sent every time it's requested. This image also interferes with the download sequence; for example, in IE, when you request extra components in the onload handler, the favicon will be downloaded before these extra components.

So to mitigate the drawbacks of having a favicon.ico make sure:

  • It’s small, preferably under 1K.
  • Set the Expires header to something you feel comfortable with (since you cannot rename the file if you decide to change it). You can probably safely set the Expires header a few months in the future. You can check the last modified date of your current favicon.ico to make an informed decision.

ImageMagick can help you create small favicons.


Keep Components under 25K

tag: mobile

This restriction is related to the fact that iPhone won’t cache components bigger than 25K. Note that this is the uncompressed size. This is where minification is important because gzip alone may not be sufficient.

For more information check “Performance Research, Part 5: iPhone Cacheability – Making it Stick” by Wayne Shea and Tenni Theurer.


Pack Components into a Multipart Document

tag: mobile

Packing components into a multipart document is like an email with attachments: it helps you fetch several components with one HTTP request (remember: HTTP requests are expensive). When you use this technique, first check whether the user agent supports it (the iPhone does not).

Avoid Empty Image src

tag: server

An image with an empty string src attribute occurs more often than one would expect. It appears in two forms:

  1. straight HTML

    <img src="">

  2. JavaScript

    var img = new Image();
    img.src = "";

Both forms cause the same effect: the browser makes another request to your server.

  • Internet Explorer makes a request to the directory in which the page is located.
  • Safari and Chrome make a request to the actual page itself.
  • Firefox 3 and earlier versions behave the same as Safari and Chrome, but version 3.5 addressed this issue[bug 444931] and no longer sends a request.
  • Opera does not do anything when an empty image src is encountered.

Why is this behavior bad?

  1. Cripple your servers by sending a large amount of unexpected traffic, especially for pages that get millions of page views per day.
  2. Waste server computing cycles generating a page that will never be viewed.
  3. Possibly corrupt user data. If you are tracking state in the request, either by cookies or in another way, you have the possibility of destroying data. Even though the image request does not return an image, all of the headers are read and accepted by the browser, including all cookies. While the rest of the response is thrown away, the damage may already be done.

The root cause of this behavior is the way that URI resolution is performed in browsers. This behavior is defined in RFC 3986 – Uniform Resource Identifiers. When an empty string is encountered as a URI, it is considered a relative URI and is resolved according to the algorithm defined in section 5.2. This specific example, an empty string, is listed in section 5.4. Firefox, Safari, and Chrome are all resolving an empty string correctly per the specification, while Internet Explorer is resolving it incorrectly, apparently in line with an earlier version of the specification, RFC 2396 – Uniform Resource Identifiers (this was obsoleted by RFC 3986). So technically, the browsers are doing what they are supposed to do to resolve relative URIs. The problem is that in this context, the empty string is clearly unintentional.
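
The resolution rule is easy to verify with the WHATWG URL implementation in modern JavaScript runtimes, which matches RFC 3986 on this point: an empty relative reference resolves to the base URI itself.

```javascript
// Per RFC 3986 section 5.4, an empty relative reference resolves to the
// base URI (the page's own address) -- which is exactly the extra
// request that Firefox, Safari, and Chrome used to make.
function resolveEmptySrc(pageUrl) {
  return new URL('', pageUrl).href;
}
```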

HTML5 adds to the description of the <img> tag's src attribute to instruct browsers not to make an additional request, in section 4.8.2:

The src attribute must be present, and must contain a valid URL referencing a non-interactive, optionally animated, image resource that is neither paged nor scripted. If the base URI of the element is the same as the document’s address, then the src attribute’s value must not be the empty string.

Hopefully, browsers will not have this problem in the future. Unfortunately, there is no such clause for <script src=””> and <link href=””>. Maybe there is still time to make that adjustment to ensure browsers don’t accidentally implement this behavior.

This rule was inspired by Yahoo!'s JavaScript guru Nicholas C. Zakas. For more information check out his article "Empty image src can destroy your site".


From: http://developer.yahoo.com/performance/rules.html