Fw: [News] JVC's reply clarifies: Chiang Wei-ling is one of the innocent victims

OP: bmka (偶素米虫)   2014-07-16 18:50:53
Plagiarism, or indiscriminate gift authorship —
either way, both violate academic ethics.
Minister Chiang can pick whichever charge he prefers.
※ [This article is reposted from the AfterPhD board, #1JnQjBQ1 ]
Author: bmka (偶素米虫)   Board: AfterPhD
Title: Re: [News] JVC's reply clarifies: Chiang Wei-ling is one of the innocent victims
时间: Wed Jul 16 06:29:29 2014
I hope the Ministry of Science and Technology prints out these two papers by former Minister Chiang and compares them side by side.
Paper A:
Chen, Chen-Wu, Po-Chen Chen, and Wei-Ling Chiang.
"Modified intelligent genetic algorithm-based
adaptive neural network control for uncertain structural systems."
Journal of Vibration and Control 19.9 (2013): 1333-1347.
Paper B:
Chen, C. W., P. C. Chen, and W. L. Chiang.
"Stabilization of adaptive neural network controllers for nonlinear
structural systems using a singular perturbation approach."
Journal of Vibration and Control 17.8 (2011): 1241-1252.
This is clearly *at least* self-plagiarism (which is itself plagiarism that violates academic ethics).
Former Minister Chiang should stop insisting he never plagiarized;
the facts will leave his face very swollen.
Since the equations are hard to display, I only excerpt a few (consecutive) paragraphs from the Introduction of each paper for comparison.
Paper A:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training is required to achieve
the higher accuracy derived from the transfer function and the learning
algorithm. In addition to these features, NNs also act as a universal
approximator (Hartman et al., 1990; Funahashi and Nakamura, 1993)
where the feedforward network is very important. A backpropagation
algorithm (Hecht-Nielsen, 1989; Ku and Lee, 1995), is usually used in
the feedforward type of NN but heavy and complicated learning is
needed to tune each network weight. Aside from the backpropagation
type of NN, another common feedforward NN is the radial basis function
network (RBFN) (Powell, 1987, 1992; Park and Sandberg, 1991).
Paper B:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants, with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training derived from the
transfer function and the learning algorithm is needed to achieve
sufficient accuracy. In addition, NN also acts as a universal approximator
so the feedforward network is very important (Hartman et al., 1990;
Funahashi and Nakamura, 1993). A backpropagation algorithm is usually
used in the feedforward type of NN, but this necessitates heavy and
complicated learning to tune each network weight (Hecht-Nielsen, 1989;
Ku and Lee, 1995). Besides the backpropagation type of NN, another
common feedforward NN is the radial basis function network (RBFN)
(Powell, 1987, 1992; Park and Sandberg, 1991).
Paper A:
RBFNs use only one hidden layer. The transfer function of the hidden
layer is a nonlinear semi-affine function. Obviously, the learning rate
of the RBFN will be faster than that of the backpropagation network.
Furthermore, the RBFN can approximate any nonlinear continuous
function and eliminate local minimum problems (Powell, 1987, 1992;
Park and Sandberg, 1991). These features mean that the RBFN is
usually used for real-time control in nonlinear dynamic systems.
Some results indicate that, under certain mild function conditions,
the RBFN is capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).
Paper B:
The RBFN requires the use of only one hidden layer, and the transfer
function for the hidden layer is a nonlinear semi-affine function.
Obviously, the learning rate will be faster than that of the backpropagation
network. Furthermore, one can approximate any nonlinear continuous
function and eliminate local minimum problems with this method
(Powell, 1987, 1992; Park and Sandberg, 1991). Because of these features,
this technique is usually used for real-time control in nonlinear dynamic
systems. Some results indicate that, under certain mild function conditions,
the RBFN is even capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).
Paper A:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN (Goodwin and Sin, 1984; Sanner and Slotine, 1992).
Adaptive laws have been designed for the Lyapunov synthesis approach
to tune the adjustable parameters of the RBFN, and analyze the stability
of the overall system. A genetic algorithm (GA) (Goldberg, 1989; Chen,
1998), is the usual optimization technique used in the self-learning or
training strategy to decide the initial values of the parameter vector.
This GA-based modified adaptive neural network controller (MANNC)
should improve the immediate response, the stability, and the robustness
of the control system
Paper B:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN. The adaptive laws of the Lyapunov synthesis
approach are designed to tune the adjustable parameters of the RBFN,
and analyze the stability of the overall system. A genetic algorithm (GA)
is the usual optimization technique used in the self-learning or training
strategy to decide the initial values included in the parameter vector
(Goldberg, 1989; Chen, 1998). The use of a GA-based adaptive neural
network controller (ANNC) should improve the immediate response,
stability, and robustness of the control system.
Paper A:
Another common problem encountered when switching the control
input of the sliding model system is the so-called "chattering" phenomenon.
The smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws are updated
by the introduction of a boundary-layer function to cover parameter errors
and modeling errors, and to guarantee that the state errors converge
within a specified error bound.
Paper B:
Another common problem encountered when switching the control
input of the sliding model system is the so-called “chattering” phenomenon.
Sometimes the smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws for this process
are updated by the introduction of a boundary-layer function to cover
parameter errors and modeling errors. This also guarantees that the
state errors converge within a specified error bound.
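The overlap in the excerpts above can be quantified with a quick sketch using Python's standard difflib. The two strings below are the closing "chattering" paragraphs quoted from the two papers; the 0.9 threshold for "near-verbatim" is my own rough assumption, not an official plagiarism standard:

```python
from difflib import SequenceMatcher

# Closing paragraphs quoted above
# (Paper A: JVC 19.9 (2013); Paper B: JVC 17.8 (2011)).
paper_a = (
    'Another common problem encountered when switching the control '
    'input of the sliding model system is the so-called "chattering" '
    'phenomenon. The smoothing of control discontinuity inside a thin '
    'boundary layer essentially acts as a low-pass filter structure '
    'for the local dynamics, thus eliminating chattering.'
)
paper_b = (
    'Another common problem encountered when switching the control '
    'input of the sliding model system is the so-called "chattering" '
    'phenomenon. Sometimes the smoothing of control discontinuity '
    'inside a thin boundary layer essentially acts as a low-pass '
    'filter structure for the local dynamics, thus eliminating '
    'chattering.'
)

# autojunk=False avoids discarding long common character runs
ratio = SequenceMatcher(None, paper_a, paper_b, autojunk=False).ratio()
print(f"similarity: {ratio:.3f}")  # close to 1.0: near-verbatim reuse
```

The two passages differ by only a couple of words ("Sometimes the" vs. "The"), so the similarity ratio lands well above 0.9.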
If this is not plagiarism, what is?
Further reading: The ethics of self-plagiarism
http://cdn2.hubspot.net/hub/92785/file-5414624-pdf/media/ith-selfplagiarism-whitepaper.pdf
Self-Plagiarism is defined as a type of plagiarism in which
the writer republishes a work in its entirety or reuses portions
of a previously written text while authoring a new work.
Author: jhyen (jhyen)   2014-07-16 06:39:00
Never mind the rest — just digging up the 60 papers retracted by JVC would be spectacular enough...
OP: bmka (偶素米虫)   2014-07-16 07:23:00
Note that the second paper is not even among the 60 that were caught! It seems there are plenty more unexploded bombs.
Author: MyDice (我爱林贞烈)   2014-07-16 08:10:00
The Ministry of Science and Technology won't investigate these; one can only report them to JVC.
Author: wacomnow (无忧)   2014-07-16 08:19:00
Great work! Reporters, come copy this story!
Author: WTFCAS (我爱黑袜宝贝)   2014-07-16 08:57:00
The keyboard made a typo again...
Author: flashegg (闪光蛋)   2014-07-16 10:42:00
The second paper (the earlier 2011 one) not being among the 60 that were caught suggests it may have passed review by real scholars. The 2013 paper, being self-plagiarism, couldn't risk real review, so fake reviewer accounts were used and JVC accepted it. The above is just my personal take.
OP: bmka (偶素米虫)   2014-07-16 10:49:00
You'd have to ask Chiang Wei-ling about that... His only two options now are plagiarism or never having read the paper at all.
Author: flashegg (闪光蛋)   2014-07-16 10:50:00
Also, CW Chen could argue that the 2013 paper is a follow-up to the 2011 one.
OP: bmka (偶素米虫)   2014-07-16 10:50:00
My guess is there are other papers churned out from the same template.
OP: bmka (偶素米虫)   2014-07-16 10:51:00
Even for a follow-up, self-plagiarism is not allowed; that's common sense.
Author: flashegg (闪光蛋)   2014-07-16 10:51:00
In any case, this kind of self-plagiarism is not unheard of in STEM papers. In the end it gets sent back for re-review by the department or college evaluation committee and quietly fizzles out.
OP: bmka (偶素米虫)   2014-07-16 10:54:00
Plagiarism is plagiarism; academia will render its own verdict :)
Author: flashegg (闪光蛋)   2014-07-16 10:55:00
Besides, if CW Chen steps up and takes the fall, saying he listed his adviser without Chiang's consent, purely because he had been mentored by him or out of respect, wouldn't that let Chiang walk away unscathed? This is also just my personal take~
OP: bmka (偶素米虫)   2014-07-16 10:56:00
If it were one paper added without his knowledge, fine. But over all these years a whole pile were added, supposedly without his knowledge, and they're still listed openly on his CV... that is very hard to explain away. Actually, my guess is that Chiang Wei-ling never read these papers (tributes) at all; he just doesn't dare admit they are not his research. He violated academic ethics by taking authorship. If you dare accept tribute papers from your students, you should dare carry the consequences, not push the blame onto the student when things blow up.
Author: flashegg (闪光蛋)   2014-07-16 11:03:00
This all comes down to morals and human nature. Suppose CW Chen really did add his adviser's name without Chiang's knowledge, and only told him after the paper was accepted. How many advisers would say, "No, take my name off immediately"? I suspect most would happily accept it and even think the student was being thoughtful.
OP: bmka (偶素米虫)   2014-07-16 11:05:00
That would still be Chiang's fault. The proper response is to sternly warn the student never to do such a thing again.
Author: flashegg (闪光蛋)   2014-07-16 11:07:00
I'm not condoning Chiang's behavior; I'm just saying this sort of thing really is commonplace.
OP: bmka (偶素米虫)   2014-07-16 11:07:00
Academia is a small world; you guard your own reputation, all the more so for a big name like Chiang.
OP: bmka (偶素米虫)   2014-07-16 11:08:00
I know it's commonplace too. But if you dare do it, don't try to dodge responsibility when it blows up — that's all. If Chiang hadn't kept dodging responsibility, I wouldn't have wasted time reading their garbage papers (the more I read, the angrier I got). Also, Chiang was far too indiscriminate, taking authorship even on papers in a third-rate journal like this.
Author: tainanuser (南南南)   2014-07-16 11:42:00
Upvoted, very thorough!
Author: MyDice (我爱林贞烈)   2014-07-16 12:05:00
Can we see from the MOST website or Chiang's own page how many publications he has had since 2010? Especially how rampant the indiscriminate authorship was during his years as university president and minister.
Author: ceries (no)   2014-07-16 14:53:00
Impressive!
Author: jabari (Still不敢开枪的娘娘腔)   2014-07-16 16:27:00
Can we blame this on the student movement? Or is it the lingering poison of those eight years??
Author: jack5756 (Dilbert)   2014-07-16 17:09:00
It really is all the student movement's fault, and many of the papers are indeed the eight-year lingering poison.
Author: MIT8818 (台湾制造)   2014-07-16 18:57:00
How does this content show he's innocent?
Author: soultakerna   2014-07-16 18:57:00
They literally copy-pasted XD
OP: bmka (偶素米虫)   2014-07-16 18:58:00
He's not innocent at all. Minister Chiang's paper is plagiarism — self-plagiarism at that — and there's no denying it.
Author: soultakerna   2014-07-16 18:58:00
Looks like they changed a tiny bit lol
Author: soria (soria)   2014-07-16 18:59:00
Oh my, self-plagiarism?
Author: soultakerna   2014-07-16 19:03:00
Do these paragraphs cite a reference? I can't find the original. I do know that changing just a little is still plagiarism.
Author: soria (soria)   2014-07-16 19:08:00
Now I know why he rushed to distance himself on day one: the more you dig, the more problems turn up.
OP: bmka (偶素米虫)   2014-07-16 19:09:00
Zombie peer review exists precisely to sneak obviously problematic papers like this through.
Author: walei98 (超和平buster)   2014-07-16 19:15:00
Bump.
Author: offish (offish)   2014-07-16 19:19:00
No time to read it carefully; bumping first.
Author: soria (soria)   2014-07-16 19:37:00
The devil is in the details.
