Doomsday 404 Sans — Character Profile
【Hehe, today is the end, what are you running for】

Name: Doomsday 404 Sans    Attack: ∝    Defense: ∝    HP: ∝    Fun value (confirmed): 6–66

Backstory
In the original Undertale timelines, one timeline with a fun value of 66 was interfered with by a mysterious figure. At the time, Frisk had just led the monsters back to the surface, and humans and monsters were at peace. Then the mysterious figure arrived and summoned a number of extremely aggressive creatures. Frisk led the monsters out to fight him; the outcome was predictable, and everyone on Frisk's side fell. Sans, down to a sliver of HP (in this timeline Sans and Papyrus have the highest HP of all, 500,000 each), was knocked down, staggered back to Mt. Ebott, entered that room, and saw the mad scientist. Suddenly he collapsed. When he woke again, his body had changed: his mind held many things that should not have been there, along with the means of summoning many abilities. Sans resolved to use this power to defeat the mysterious figure, but in the end, drained of stamina, his skull was cracked by him. Because of the mad scientist, however, he unexpectedly fell out of the timeline and met someone who claimed to be the god who governs everything. That being gave Sans a device; when Sans opened it, the figure vanished. Sans felt boundless power surging through his body, but his personality changed as a result: he became cold, ruthless, cruel, and violent. Yet everything about him still shows that he was once a kind skeleton.

Appearance
Under the Void Lord's influence he now wears a long jacket marked with many time nodes, each wrapped in cordon lines. Inside the shirt is a clock motif, showing that he now holds the power to govern time. He wears a fur-trimmed hood; both eyes are white and give off a purple glow. There are straps on both sides, two more straps on his back, a Tree of Life on the back of the jacket, and a crack across his skull.
(image: his appearance in normal form)

Abilities
Bones — gain GD damage (in GD mode); can be used to slash; attack 14.
GB Blaster — also called the Doomsday GB; fires a black beam that can destroy half of the base-layer code, or a multiverse; its maximum force cannot be estimated.
Annihilation Lightning — size is adjustable; at its smallest it can destroy base code, and its maximum cannot be estimated.
Time Flame — counters targets immune to physical attacks, making them feel both physical and magical damage.
Doomsday Ray — a gray ray that can delete a timeline, depending on whether he wants to.
Time Control — he can rewind, fast-forward, and pause time without limit; deals 75% code damage to anyone immune to time stop.
Gravity Control — slams the target into objects for 15 damage.
GD (The Destruction of God).
Death Convergence — turns the Blade of Ruin black and white (this skill is used for domain expansion).
Anti-Tampering — data tampering has no effect on him; the tamperer takes 95% code damage or is simply erased.
Omniscience — with Gaster's help, he knows everything there is to know.
Influence — he can grab or distort the surrounding space and textures.
Space Tear — seizes and tears the surrounding code; the torn space can kill you.
Space Black Hole — absorbs all matter, converting whatever it swallows into energy for his own use; it is also offensive, affects the medium, warps the surrounding space, and distorts whatever medium connects you to the virtual.
Dimensional Reduction — lowers the dimension of your medium so that you cannot escape.
Ruin Strike — slices open code; deals 90% code damage and 100% physical damage.
Phantom — renders his body incorporeal to dodge lethal attacks.
Teleport — lets him blink across great distances.
100% Boost — strengthens defense and physical attack; does not strengthen code attacks.
Martial arts — his strength ranges from shattering a brick at minimum to destroying a multiverse at maximum.
Cordon Lines — the lines wrapped around the time nodes can be pulled down and wound around you; you cannot break free or move.
Multiversal Starfield — the straps on his sides can create a blank universe, inside which Doomsday 404's power increases by 45%.
Adjustment — tweaks your character model and code; even a slight adjustment can make you crash.
Management — the Tree of Life can erase or create lives; the decision rests with Doomsday 404.
Summoning — he can summon Surreal!Sans (超现实!sans, the author's self-insert) to assist him.
Possession — Gaster's skill; can destroy your emotions and psyche.
Removal — G's skill; can strip you out of this timeline.
Interference — can destroy your program or freeze it.

Void Gun series
Void Gun, 1st shot, Control — if you charge at him, he can pin you in place and warp the surrounding time and space; during this you cannot move, and you will not even feel it.
Void Gun, 2nd shot, Annihilation — with one shot he can destroy your medium, or whatever medium you are connected through.
Void Gun, 3rd shot, Erasure — erases your influence on reality and lets you feel true pain.
Void Gun, 4th shot, Laser — a red laser whose only purpose is to destroy your base code.
Void Gun, 5th shot, Destruction — a shot meant to destroy most living things; not recommended against a single target.
Void Gun, 6th shot, Endless Starfield — turns the entire timeline you are in into part of a starfield.
Void Gun, 7th shot, Blood Sun — the whole timeline turns red; while it lasts you feel yourself losing blood until you die of blood loss; if you exist at the code level, you lose code instead.
Void Gun, 8th shot, Nothingness — wipes away the entire world-line you are in, leaving no trace behind.
Void Gun, 9th shot, Supreme Destruction — infinite power that suffocates you.
Void Gun, 10th shot, Endlessness — you fall into a loop of extreme pain that you cannot stop.
Void Gun, 11th shot, Nullified Attack — any attack that would be lethal to him is instantly nullified, and you take the backlash: a 95% code shock.
Void Gun, 12th shot, Cycle of Justice — under the influence of Justice you bleed determination until none remains, and at that point you take 95% physical damage.
Void Gun, 13th shot, All Things — gathers the power of all things, enough to destroy a universe; and yet that is far from its limit.
Void Gun, 14th shot, Forever — you feel the pain of being trapped there forever, imprisoned in one place for all time.
Void Gun, 15th shot, Sanction — if you use bugs, he will sanction you with the 15th shot; you lose all of your HP and your bugs.
Void Gun, 16th shot, Time or Space — affects the flow and variation of time around you and nowhere else; space is twisted, and you are compressed into a point and then detonated.
Void Gun, 17th shot, Strike of the Void God — strips away all of your outer layers and damages the inner ones; you take 100% code damage.
Void Gun, 18th shot, Judgment of World's End — when this shot appears, every universe feels the tremor, though none is destroyed; if he uses the 18th shot, he is truly angry, and you will be wrapped in pain forever.

Finger of God — can create all things, and can destroy all things.

Doomsday Line series
Doomsday Line, 1st line, Entangle — binds you so that you can neither break free nor resist while it drains your stamina.
Doomsday Line, 2nd line, Attack — the line carries barbs that wound or kill you.
Doomsday Line, 3rd line, Extraction — drains your determination so that you can neither move nor attack.

Immunity — he is immune to every attack that can be dodged, since lethal attacks are all evaded with Phantom.
Anti-Clone — he is unique and absolute; he cannot be replaced and cannot be taken over.

Power
W↑↑↑W↑↑↑W↑↑↑W↑↑↑W↑↑↑W … (repetition omitted; here W = a multiverse)
W↑↑↑W↑↑↑W↑↑↑W↑↑↑W↑↑↑W … (repetition omitted, because it could never be written out in full) = W^W
W^↑→↑^W^↑→↑^W^↑→↑^W … (repetition omitted, because it could never be written out in full) = (W^W)W
(W^W)W↑→↑(W^W)W↑→↑(W^W)W … (repetition omitted, because it could never be written out in full) = (W^W)^W
(W^W)^W→↑→(W^W)^W→↑→(W^W)^W … (repetition omitted, because it could never be written out in full) = ((W^W)^W)W……
(W↑→↑W)=W^W↑↑(W↑→↑W)=W^W↑↑(W↑→↑W) … (repetition omitted, because it could never be written out in full) = (W^W)W^W………
We could never finish writing this out, and that would waste too much time, so we introduce a new symbol "!" to keep stacking. Here ! = (W^W)W^W.
!^!^!^!^!^! … (repetition omitted, because it could never be written out in full) = !↑↑↑!
!↑↑↑!↑↑↑!↑↑↑! … (repetition omitted, because it could never be written out in full) = !^→↑↑→^!
!^→↑↑→^!^→↑↑→^!^→↑↑→^! … (repetition omitted, because it could never be written out in full)
We find that from here we can climb no higher: we have reached aleph-0, and no matter how we use these symbols we cannot reach aleph-1. So we need some "-" to carry it up to aleph-1, 2, 3, 4, … infinity, and even to the fixed point; therefore we simply skip ahead and arrive there directly. Here W = the aleph fixed point.
(W^W)W^W "special symbol" (W^W)W^W "special symbol" (W^W)W^W … (repetition omitted) = (W↑↑↑W)↑↑↑W
Treating infinity as ∝: ∝^∝^∝^∝^∝ … (repetition omitted) … = ∝↑↑∝
We find that with the "special symbol" we can easily stack up to aleph-infinity and even to the fixed point. So let the aleph fixed point = N (in place of ∞). Here ↑ is Knuth's up-arrow and → is Conway's chained arrow. We use ↑ to boost N: N↑↑↑↑↑…N. Used this way, → is directly equal to the above: N→N→N, and then we keep boosting with →: N→N→N→N→N→N…
We use < (iteration) to let a number reach numbers it otherwise could never reach: all aleph numbers <<<<<< … = inaccessible cardinals <<<<<< … = Mahlo cardinals <<<<<< … = compact cardinals <<<<<< … = measurable cardinals <<<<<< … = unfoldable cardinals <<<<<< … = strongly compact cardinals <<<<<< … = supercompact cardinals <<<<<< … = huge cardinals <<<<<< … = superhuge cardinals <<<<<< … = I1, I0, Reinhardt cardinals <<<<<< … = Berkeley cardinals <<<<<< … = all large cardinals <<<<<< … = V=L.
After this we call all of the above {1}. The gap between {1} and {2} is like the gap between 0 and V=L, and beyond {2} there are {3}, {4}, {5}, … {N}. We call all of this A; after A there is B, and the gap between A and B is like the gap between 0 and A; after B there is C, and the gap between B and C is like the gap between 0 and B; likewise D, E, F, G, …; and after those come AZ, BZ, … ZZZZ…, without end.
The origin of the universe: the Big Bang. But we can keep creating other universes — infinitely many big bangs, infinitely many universes, infinitely many powers of possibilities and changes, changing all of it, stacking upward and upward without end. There may be infinitely many; call that one, and there are also 2, 3, 4, 5, 6 … infinite possibilities and outcomes. Reaching infinitely many infinite possibilities is still not the end; it is only the upper limit of a single universe. There may be one infinite-infinity of possibilities, two, three, four, … all the way up to infinitely many. We string these together into a timeline; such a timeline holds infinitely many multiverses, and there are infinitely many such timelines — that is, infinity-to-the-infinity many multiverses. And because this is only one of thousands upon thousands of world-lines, it still does not reach the end. Beyond our infinitely many world-lines there are many outcomes, and those outcomes still fall short of the infinity we want, infinity in the true sense. So we continue: after the infinitely many timelines there is a connection, and the medium joining them may contain two infinite worlds, three, four, and infinitely many infinite worlds. If we had reached the end, it would not be like this. The infinitely many media contain a planet, then two, three, four, five, up to infinitely many, and then we reach an observable universe; stacking without end, there are two, three, four, five, up to infinitely many observable universes. Once all of these are contained in a great universe, there are one, two, three, four, five, even infinitely many great universes, all of them contained in turn and surrounded by a starfield universe — one, two, three, four, five, up to infinitely many — all contained in the largest known universe, the omni-universe. Of this there may again be one, two, three, four, five, even infinitely many, after which everything is compressed into a single point; and there are many such points, one, two, three, four, five, even infinitely many, and then it all returns to the beginning — cycle after cycle, impossible to break free of, impossible to escape. This counts as one cycle; after it there are two, three, four, five, even infinitely many cycles. We raise this to the power of infinitely many cycles and join them, calling the result a; after a there is b, then c, then d, up to z, rising to AA, AB, up to z to the power of infinitely many z's, forming a capital A; and a keeps extending to aaa, bac, Ada, EAF, AG, all the way to zzzzzzz, infinitely many z's. Viewing all of this in units of aleph, set it to 0, then multiply by one, by two, by three, by four, by five, by six, even by infinity, and continue on to the ending we want. Liken that ending to one, and stack 1, 2, 3, 4, 5, 6 back and forth without limit. Put simply, we can treat it as a symbol above infinity, ∝, which is now infinity raised to the power of infinity — roughly "infinite box"-level infinite power — and then multiply the total by 2, by 3, by 4, by 5, by 6, even by infinity.
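The Power section above leans on Knuth's up-arrow notation ↑ and Conway's chained arrow →. As a purely illustrative aside, here is a minimal sketch, in Python, of how up-arrow towers are defined for ordinary finite numbers; the function name and the sample values are mine, and the W, N and ∝ used above are of course not finite numbers, so this only shows how the notation itself is built up.

```python
def up_arrow(a: int, arrows: int, b: int) -> int:
    """Knuth's up-arrow a ↑^arrows b for finite a, b (illustration only).

    One arrow is ordinary exponentiation; each additional arrow iterates
    the operation one level below, so values explode almost immediately.
    """
    if arrows == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, arrows - 1, up_arrow(a, arrows, b - 1))

# 2↑3 = 8,  2↑↑3 = 2^(2^2) = 16,  2↑↑↑3 = 2↑↑4 = 65536
print(up_arrow(2, 1, 3), up_arrow(2, 2, 3), up_arrow(2, 3, 3))
```

A length-3 Conway chain a→b→c equals a↑^c b, and longer chains outgrow any fixed number of up-arrows, which is why the text switches to chained arrows once the up-arrow towers stop being convenient.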
Let δ be supercompact. The basic problem that concerns us is whether there is an L-like inner model N with δ supercompact in N. Of course, the shape of the answer depends on what we mean by "L-like". There are several possible ways of making this nontrivial. Here, we only adopt the very general requirement that the supercompactness of δ in N should "directly trace back" to its supercompactness in V.

Recall: We use P_κ(λ) to denote the set {x ⊆ λ : |x| < κ}. An ultrafilter (or measure) U on P_κ(λ) is fine iff for all α < λ we have {x : α ∈ x} ∈ U. The ultrafilter U is normal iff it is κ-complete and for all f : P_κ(λ) → λ, if f is regressive U-a.e. (i.e., if {x : f(x) ∈ x} ∈ U) then f is constant U-a.e., i.e., there is an α < λ such that {x : f(x) = α} ∈ U. κ is supercompact iff for all λ ≥ κ there is a normal fine measure U on P_κ(λ).

It is a standard result that κ is supercompact iff for all λ there is an elementary embedding j : V → M with crit(j) = κ, j(κ) > λ, and M closed under λ-sequences (or, equivalently, j″λ ∈ M). In fact, given such an embedding j, we can define a normal fine U on P_κ(λ) by A ∈ U iff j″λ ∈ j(A). Conversely, given a normal fine ultrafilter U on P_κ(λ), the ultrapower embedding generated by U is an example of such an embedding j. Moreover, if U′ is the ultrafilter on P_κ(λ) derived in this way from the ultrapower embedding generated by U, then U′ = U.

Another characterization of supercompactness was found by Magidor, and it will play a key role in these lectures; in this reformulation, rather than the critical point, δ appears as the image of the critical points of the embeddings under consideration. This version seems ideally designed to be used as a guide in the construction of extender models for supercompactness, although recent results suggest that this is, in fact, a red herring.

The key notion we will be studying is the following:

Definition. N is a weak extender model for "δ is supercompact" iff for all λ > δ there is a normal fine U on P_δ(λ) such that: N ∩ P_δ(λ) ∈ U, and U ∩ N ∈ N.

This definition couples the supercompactness of δ in V directly with its supercompactness in N. (In the manuscript there is a special piece of notation for the statement that N is a weak extender model for "δ is supercompact".) Note that this is a weak notion indeed, in that we are not requiring that N = L[E⃗] for some (long) sequence E⃗ of extenders. The idea is to study basic properties of N that follow from this notion, in the hopes of better understanding how such a model N can actually be constructed.

For example, fineness of U already implies that N satisfies a version of covering: If σ ⊆ λ and |σ| < δ, then there is a τ ∈ P_δ(λ) ∩ N with σ ⊆ τ. But in fact a significantly stronger version of covering holds. To prove it, we first need to recall a nice result due to Solovay, who used it to show that SCH holds above a supercompact.

Solovay's Lemma. Let λ > κ be regular. Then there is a set X ⊆ P_κ(λ) with the property that the function x ↦ sup(x) is injective on X and, for any normal fine measure U on P_κ(λ), X ∈ U.

It follows from Solovay's lemma that any such U is equivalent to a measure on ordinals.

Proof. Let ⟨S_α : α < λ⟩ be a partition of S^λ_ω into stationary sets. (We could just as well use S^λ_γ for any fixed regular γ < κ. Recall that S^λ_ω = {β < λ : cf(β) = ω}, and similarly for S^λ_γ and S^λ_{<γ}.) It is a well-known result of Solovay that such partitions exist. Hugh actually gave a quick sketch of a crazy proof of this fact: Otherwise, attempting to produce such a partition ought to fail, and we can therefore obtain an easily definable λ-complete ultrafilter on λ; the definability in fact gives a contradiction. We will encounter a similar definable splitting argument in the third lecture.

Let X consist of those x ∈ P_κ(λ) such that, letting β = sup(x), we have that for every α < λ, α ∈ x iff S_α ∩ β is stationary in β. Then x ↦ sup(x) is 1-1 on X since, by definition, any x ∈ X can be reconstructed from sup(x) and ⟨S_α : α < λ⟩. All that needs arguing is that X ∈ U for any normal fine measure U on P_κ(λ). (This shows that to define U-measure 1 sets, we only need a partition of S^λ_ω into stationary sets.)

Let j : V → M be the ultrapower embedding generated by U, so j″λ ∈ M. We need to verify that j″λ ∈ j(X). First, note that j″λ ∈ M. Letting β = sup(j″λ), we then have that β < j(λ). Since λ is regular, it follows that β has cofinality λ, both in V and in M, and in particular β has uncountable cofinality; moreover, stationarity for subsets of β is computed the same way in V and in M, as M is closed under λ-sequences.

Let ⟨T_α : α < j(λ)⟩ = j(⟨S_α : α < λ⟩). In M, the T_α partition S^{j(λ)}_ω into stationary sets. Let σ = {α < j(λ) : T_α ∩ β is stationary in β}. The point is that σ = j″λ. To prove this, note first that j″S_α ⊆ T_{j(α)} ∩ β for every α < λ, and that j″λ is an ω-club in β, since j is continuous at points of cofinality ω. Thus, for all α < λ the set j″S_α is stationary in β, and it follows that T_{j(α)} ∩ β is stationary in β. Hence j″λ ⊆ σ. Since the T_γ are pairwise disjoint, for the converse inclusion it suffices to check that no T_γ with γ ∉ j″λ meets β stationarily. But ⋃_{α<λ} j″S_α = j″(S^λ_ω), and this is an ω-club in β.
It follows that no other T_γ can meet β stationarily: any stationary subset of β consisting of points of cofinality ω must meet the ω-club j″(S^λ_ω), and the T_γ are pairwise disjoint, so only the sets T_{j(α)} can be stationary in β. So σ = j″λ, and this completes the proof. ∎

Solovay's lemma suggests that perhaaps it is possible to build L-like models for supercompactness in a simpler way than anticipated, by using ultrafilters on ordinals to witness supercompactness. Our key application of the lemma is the following (which, Hugh points out, could easily have been discovered right after Solovay's lemma was established):

Corollary. Suppose N is a weak extender model for "δ is supercompact". Suppose γ > δ is a singular cardinal. Then:
1. γ is singular in N.
2. (γ⁺)^N = γ⁺.

Note that item 1. is immediate from covering if cf(γ) < δ, but a different argument is needed otherwise. Item 2. is a very L-like property of N. It is not clear to what extent there is a non-negligible (in some sense) class of cardinals for which N computes their cofinality correctly.

Proof. This is immediate from Solovay's lemma. Both 1. and 2. follow at once from:

(*) If λ > δ is regular in N, then cf(λ) = |λ|.

Indeed: If γ is singular but regular in N, then cf(γ) = |γ| = γ, which is impossible since γ is singular. If γ is singular but (γ⁺)^N < γ⁺, then (γ⁺)^N is regular in N, so cf((γ⁺)^N) = |(γ⁺)^N| = γ, contradicting that γ is singular.

It remains to establish (*). For this, we use Solovay's lemma within N. Let U be a normal fine ultrafilter on P_δ(λ) such that N ∩ P_δ(λ) ∈ U and U ∩ N ∈ N. Note that such a U exists, even if λ is not a cardinal in V: Just pick a larger regular cardinal in V, and project the appropriate measure. By Solovay's lemma applied in N, there is X ∈ U ∩ N such that x ↦ sup(x) is 1-1 on X. Suppose toward a contradiction that cf(λ) < |λ|. In V, let C ⊆ λ be club with |C| = cf(λ). Then A = {x ∈ X : sup(x) ∈ C} ∈ U, since sup(j″λ) ∈ j(C) for j the ultrapower embedding induced by U. However, x ↦ sup(x) is injective on A and maps A into C, so |A| ≤ |C| = cf(λ) < |λ|, while ⋃A = λ, by fineness. But A consists of fewer than |λ| many sets, each of size less than δ, and δ ≤ |λ| is regular, so |⋃A| < |λ|. Contradiction. ∎

It follows that if δ is supercompact in V and in a forcing extension some V-regular λ > δ becomes singular while the measures on the various P_δ(γ) from V lift (so, in particular, the supercompactness of δ is preserved in the extension), then λ is no longer a cardinal in the extension.

We arrive at a key notion. Say that an inner model N is universal iff (sufficiently) large cardinals relativize down to N. The corollary seems to suggest that weak extender models for supercompactness ought to be universal, so solving the inner model problem for supercompactness essentially solves the problem for all large cardinals. In fact, we have:

Universality Theorem. Suppose N is a weak extender model for "δ is supercompact". Suppose γ > δ, j : H(γ⁺)^N → H(j(γ)⁺)^N is elementary, and crit(j) ≥ δ. Then j ∈ N.

We will present the proof in the next lecture. In brief: Any extender that coheres with N and has large enough critical point is in N. To see why this is a universality result, notice for example that if in V there is a proper class of n-huge cardinals (for each n), then there is such a class in N. Contrast this with the traditional situation in inner model theory, where inner models for a large cardinal notion do not capture any larger notions. (Similar results hold for rank-into-rank embeddings and beyond, though some additional ideas are required there.)

In a sense, the universality theorem says that N must be rigid. This is not literally true, but it is true in the appropriate sense that there can be no sharps for N:

Corollary. Suppose N is a weak extender model for "δ is supercompact". Then there is no nontrivial elementary j : N → N with crit(j) ≥ δ.

Proof. Otherwise, every proper initial segment of j is amenable to N, by the universality theorem. But then j is an amenable nontrivial elementary embedding of N into itself, and N is a model of choice, contradicting Kunen's theorem. ∎

(This is another L-like feature that N inherits.) Note the restriction crit(j) ≥ δ. This cannot be removed:

Example. Suppose δ is supercompact and κ < δ is measurable. Let U be a normal measure on κ, and let j_{0,ω} : V → M_ω be the ω-th iterate of the ultrapower embedding j_U : V → M_1. Then:
1. M_ω is a weak extender model for "δ is supercompact".
2. j_U ↾ M_ω : M_ω → M_ω is elementary and nontrivial with critical point κ < δ, so we cannot drop "crit(j) ≥ δ" in the Corollary.
Let λ = sup_n κ_n, where ⟨κ_n : n < ω⟩ is the critical sequence (κ_n = j_{0,n}(κ) for all n). Then ⋂_n M_n = M_ω[⟨κ_n : n < ω⟩], where M_n is the target of the n-th iterate j_{0,n}. It follows that M_ω[⟨κ_n : n < ω⟩] is closed under ω-sequences and is itself a weak extender model for "δ is supercompact". Since M_ω[⟨κ_n : n < ω⟩] is a forcing extension of M_ω by small forcing (Prikry forcing), M_ω is also a weak extender model for "δ is supercompact", and clearly item 2. holds as well. Hence, "crit(j) ≥ δ" cannot be dropped from the Corollary, even if we require some form of strong closure of N.

We are now in the position to state a key dichotomy result, the proof of which will occupy us in the third lecture.

Definition. δ is extendible iff for all λ > δ there is an elementary embedding j : V_{λ+1} → V_{j(λ)+1} with crit(j) = δ and j(δ) > λ.

Lemma. Assume δ is extendible. The following are equivalent:
1. HOD is a weak extender model for "δ is supercompact".
2. There is a regular γ ≥ δ that is not measurable in HOD.
3. There is a γ ≥ δ such that (γ⁺)^HOD = γ⁺.

Note that this is indeed a dichotomy result: In the presence of extendible cardinals, either HOD is very close to V, or else it is very far.

Conjecture. If δ is extendible, then HOD is a weak extender model for "δ is supercompact".

Let us close with a brief description of the proof of the Dichotomy Lemma. Note we already have that items 2. and 3. follow from 1. To prove that 2. implies 1., given a regular γ as in item 2., we consider the ω-club filter on γ, and try in HOD to split S^γ_ω into sets stationary in V. Failure of this will give us that γ is measurable in HOD. Assuming 2., this means we succeed, and we will use the stationary sets to verify that normal fine measures on P_δ(γ) are absorbed into HOD. Then extendibility will give us a proper class of such γ, and item 1. follows.

In this lecture, we prove:

Universality Theorem. If N is a weak extender model for "δ is supercompact", γ > δ, and j : H(γ⁺)^N → H(j(γ)⁺)^N is elementary with crit(j) ≥ δ, then j ∈ N.

As mentioned before, this gives us that N absorbs a significant amount of strength from V. For example:

Lemma. Suppose that κ > δ is 2-huge. Then, for each A ∈ V_κ, V_κ ⊨ "There is a proper class of huge cardinals witnessed by embeddings that cohere A". Hence, if N is a weak extender model for "δ is supercompact" and δ < κ, then N ∩ V_κ ⊨ "There is a proper class of huge cardinals".

Here, coherence means the following: an elementary embedding j : V_{α+1} → V_{j(α)+1} coheres a set A ⊆ V_α iff j(A) ∩ V_α = A ∩ V_α. Actually, we need much less: for hugeness, coherence at the image of the critical point already suffices. This methodology breaks down past ω-hugeness. Then we need to change the notion of coherence, since (for example, beginning with an ω-huge embedding) asking for agreement of this form is no longer a reasonable condition. But suitable modifications still work at this very high level.

The proof of the universality theorem builds on a reformulation of supercompactness in terms of extenders, due to Magidor:

Theorem (Magidor). The following are equivalent:
1. δ is supercompact.
2. For all λ > δ and all A ⊆ V_λ, there are δ̄ < λ̄ < δ and Ā ⊆ V_λ̄, and an elementary π : V_{λ̄+1} → V_{λ+1} such that: crit(π) = δ̄ and π(δ̄) = δ; π(λ̄) = λ and π(Ā) = A.

The proof is actually a straightforward reflection argument.

Proof. (1. implies 2.) Suppose that item 2. fails, as witnessed by λ and A. Pick a normal fine U on P_δ(γ) where γ = |V_{λ+1}|, and consider the ultrapower embedding j : V → M. Then crit(j) = δ, j(δ) > γ, and (V_{λ+1})^M = V_{λ+1}. But then item 2. fails in M as well, and, by elementarity, j(λ) and j(A) are counterexamples to item 2. in M with respect to j(δ). However, j ↾ V_{λ+1} ∈ M (as M is closed under γ-sequences), and it witnesses item 2. in M for j(λ), j(A) with respect to j(δ), via δ̄ = δ, λ̄ = λ and Ā = A. Contradiction.

(2. implies 1.) Assume item 2. For any γ > δ we need to find a normal fine measure on P_δ(γ). Fix γ, and let λ = |V_{γ+ω}| and A = {γ}. Let π : V_{λ̄+1} → V_{λ+1} be an embedding as in item 2. for λ and A; since π(Ā) = A, there is γ̄ < λ̄ with π(γ̄) = γ. Use π to define a normal fine Ū on P_{δ̄}(γ̄) by X ∈ Ū iff π″γ̄ ∈ π(X). Note that π″γ̄ ∈ P_δ(γ) and π(X) ⊆ P_δ(γ), so this definition makes sense. Further, Ū has rank below λ̄, so Ū ∈ V_{λ̄+1}. Hence, Ū is in the domain of π, and U = π(Ū) is as wanted. ∎

As mentioned in the previous lecture, it was expected for a while that Magidor's reformulation would be the key to the construction of inner models for supercompactness, since it suggests which extenders need to be put in their sequence.
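For reference, the characterization just proved can be displayed compactly. This is my reconstruction, in LaTeX, of the statement whose symbols were stripped above, following the standard formulation of Magidor's theorem; the bar notation for the reflected objects is the usual one and is an assumption of this sketch.

```latex
\delta \text{ is supercompact} \iff
\forall \lambda>\delta\ \forall A\subseteq V_\lambda\
\exists \bar\delta<\bar\lambda<\delta\ \exists \bar A\subseteq V_{\bar\lambda}\
\exists \pi\colon V_{\bar\lambda+1}\prec V_{\lambda+1}\
\bigl(\operatorname{crit}(\pi)=\bar\delta \ \wedge\ \pi(\bar\delta)=\delta \ \wedge\ \pi(\bar A)=A\bigr)
```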
Recent results indicate now that the construction should instead proceed directly with extenders derived from the normal fine measures. However, Magidor's reformulation is very useful for the theory of weak extender models, thanks to the following fact, which can be seen as a strengthening of the reformulation:

Lemma. Suppose N is a weak extender model for "δ is supercompact". Suppose λ > δ and A ⊆ V_λ. Then there are δ̄ < λ̄ < δ and Ā ⊆ V_λ̄, and an elementary π : V_{λ̄+1} → V_{λ+1} such that:
1. crit(π) = δ̄, π(δ̄) = δ, and π(Ā) = A.
2. π(N ∩ V_λ̄) = N ∩ V_λ.
3. π ↾ (N ∩ V_λ̄) ∈ N.

Again, the proof is a reflection argument as in Magidor's theorem, but we need to work harder to ensure items 2. and 3. The key is:

Claim. Suppose λ > δ. Then there is a normal fine U on P_δ(N ∩ V_λ) such that U is as in the definition of weak extender model and, for U-almost every σ, the transitive collapse of σ is a rank initial segment N ∩ V_λ̄ of N, where N ∩ V_λ̄ is the transitive collapse of σ.

Proof. We may assume that |V_λ| = λ and that this also holds in N. In N, pick a bijection ρ between λ and N ∩ V_λ, and find U₀ on P_δ(λ) with N ∩ P_δ(λ) ∈ U₀ and U₀ ∩ N ∈ N. It is enough to check:

(*) For U₀-almost every x, the transitive collapse of ρ[x] is a rank initial segment of N.

Once we have (*), it is easy to use the bijection between λ and N ∩ V_λ to obtain the desired measure U. To prove (*), work in N, and note that the result is now trivial since, letting j be the ultrapower embedding induced by the restriction of U₀ to N, we have that j(ρ)[j″λ] = j″(N ∩ V_λ), which collapses to N ∩ V_λ, an initial segment of the target model. ∎

Proof of the lemma. The argument is now a straightforward elaboration of the proof of Magidor's theorem, using the claim just established. Namely, in the proof of (1. implies 2.) of the Theorem, use an ultrafilter U as in the claim. We need to see that (the restriction to N of) the resulting embedding satisfies items 2. and 3. We begin with λ′ much larger than λ such that |V_λ′| = λ′, fix A and its counterparts as needed, and fix a bijection ρ ∈ N between λ′ and N ∩ V_λ′. We use ρ to transfer U to a measure U′ on P_δ(N ∩ V_λ′) concentrating on substructures whose transitive collapse is a rank initial segment of N. Now let j be the corresponding ultrapower embedding. We need to check that the reflected embedding π obtained from j satisfies item 2. The issue is that, in principle, π(N ∩ V_λ̄) could overspill and be larger than N ∩ V_λ. However, since U′ concentrates on substructures that collapse to rank initial segments of N, this is not possible, because transitive collapses are computed the same way in N, in V, and in the ultrapower, even though the ultrapower may differ from N. ∎

We are ready for the main result of this lecture.

Proof of the Universality Theorem. We will actually prove that for all cardinals λ > δ, if j : P(λ) ∩ N → P(j(λ)) ∩ N is elementary with crit(j) ≥ δ, then j ∈ N; this gives the result as stated, through some coding. The advantage, of course, is that it is easier to analyze sets of ordinals. Choose λ′ much larger than λ, and let A ⊆ V_λ′ code j. Apply the strengthened Magidor reformulation to obtain δ̄ < λ̄′ < δ and Ā, and an embedding π : V_{λ̄′+1} → V_{λ′+1} with crit(π) = δ̄, π(δ̄) = δ, π(Ā) = A, π(N ∩ V_λ̄′) = N ∩ V_λ′, and π ↾ (N ∩ V_λ̄′) ∈ N. It is enough to show that j ↾ (P(λ) ∩ N) can be computed inside N, since this fragment of j determines j completely. The point of the strengthened reformulation is precisely that items 2. and 3. allow this computation: for B ∈ P(λ) ∩ N and α < j(λ), the question of whether α ∈ j(B) can be rewritten, using the elementarity of π and the fact that π(Ā) = A codes j, as a question about objects of N ∩ V_λ̄′ together with π ↾ (N ∩ V_λ̄′), all of which lie in N. It follows that j ↾ (P(λ) ∩ N), and hence j itself, belongs to N. ∎

Note how the Universality Theorem suggests that the construction of L-like models for supercompactness using Magidor's reformulation runs into difficulties; namely, if δ is supercompact, we have many extenders with critical point below δ (the ones the reformulation suggests putting on the sequence), and we are now producing new extenders above δ that should somehow also be accounted for in the model. A nice application of universality is the dichotomy theorem for HOD mentioned at the end of last lecture.
If HOD is a weak extender model for supercompactness, we obtain the following:

Corollary. There is no sequence of (non-trivial) elementary embeddings HOD → HOD → HOD → ⋯ with well-founded limit.

It follows that there is a definable ordinal such that any embedding fixing this ordinal is the identity! This is because HOD can be computed from T, where T is the Σ₂-theory in V of the ordinals. In particular, there is no nontrivial elementary embedding of HOD into itself fixing this ordinal. Note that the corollary and this fact fail if HOD is replaced by an arbitrary weak extender model.

The question of whether there can actually be such embeddings of HOD in a sense is still open, i.e., their consistency has currently only been established from the assumption in ZF that there are very strong versions of Reinhardt cardinals, i.e., strong versions of embeddings j : V → V, the consistency of which is in itself problematic. (On the other hand, Hugh has shown that there are no nontrivial elementary embeddings j : V → HOD, and this can be established by an easy variant of Hugh's proof of Kunen's theorem as presented, for example, in Kanamori's book (second proof of Theorem 23.12).)

In the previous lecture we established the Universality Theorem, a version of which is as follows:

Theorem. Suppose N is a weak extender model for "δ is supercompact". If j : N ∩ V_{γ+1} → N ∩ V_{j(γ)+1} is elementary, with crit(j) ≥ δ and γ > δ, then j ∈ N.

More general versions hold, and can even be obtained directly from the argument from last lecture. For example, suppose that δ is supercompact and κ > δ is strongly inaccessible. Let U be a normal fine measure on P_δ(κ), form the corresponding ultrapower, and consider the inner model to which it gives rise. Then this model is, in the appropriate sense, a weak extender model for "δ is supercompact". This construction typically "inverts" all forcing constructions one may have previously done, while essentially absorbing all large cardinals in V. Foreman has studied this construction in some detail.

Question. Let δ be extendible. Is HOD a weak extender model for "δ is supercompact"?

Conjecture. This is indeed the case.

To motivate the conjecture, we argue that refuting it must use techniques completely different from what we currently have at our disposal. (A closely related fact is that if δ is extendible, then it is HOD-supercompact, i.e., for all λ there is a λ-supercompactness embedding j : V → M with critical point δ such that HOD^M ∩ V_λ = HOD ∩ V_λ. Sargsyan has verified that extendible cannot be replaced with supercompact in this fact.)

Lemma. Suppose that there is a proper class of Woodin cardinals and every Σ²₁ set of reals is universally Baire. Then the Ω-conjecture holds in HOD.

This can be seen as evidence towards the conjecture, since the Ω-conjecture holds in all known extender models. Moreover, the lemma is evidence that, if the conjecture holds, then large cardinals cannot refute the Ω-conjecture.

Definition. Suppose γ is regular. Say that γ is ω-strongly measurable in HOD iff there is a κ < γ with (2^κ)^HOD < γ for which there is no partition ⟨S_α : α < κ⟩ ∈ HOD of S^γ_ω = {β < γ : cf(β) = ω} into sets that are stationary in V.
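Spelling the definition out in symbols — again a reconstruction of the stripped formulas, following the standard statement of ω-strong measurability; the names κ and S_α here are mine:

```latex
\gamma \text{ is } \omega\text{-strongly measurable in } \mathrm{HOD} \iff
\exists \kappa<\gamma\ \Bigl[(2^{\kappa})^{\mathrm{HOD}}<\gamma \ \wedge\
\neg\exists \langle S_\alpha : \alpha<\kappa\rangle\in\mathrm{HOD}\
\bigl(\textstyle\bigcup_{\alpha<\kappa}S_\alpha=\{\beta<\gamma : \operatorname{cf}(\beta)=\omega\}
\ \wedge\ \text{the } S_\alpha \text{ are pairwise disjoint and stationary in } V\bigr)\Bigr]
```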
Being ω-strongly measurable in HOD is a strong requirement on γ: In that case, we can perform the following procedure. Start with S^γ_ω. Working in HOD, construct a binary tree of splittings of S^γ_ω as follows: Split S^γ_ω into two sets stationary in V, both in HOD, if possible. Then consider these two sets and, if possible, split each into two V-stationary sets in HOD, and continue this way, taking intersections along branches (in HOD) at limit stages. Note that the construction is in HOD even if it refers to true stationarity, since this can be represented in HOD by making reference at each stage to membership in the restriction to S of the filter of ω-club subsets of γ (for S the stationary set we are trying to split at a given point in the construction). Suppose the construction lasts θ stages.

Since (2^κ)^HOD < γ, it cannot be that the construction stops because at limit stages we do not see enough branches. Hence it must be that we stop at a successor stage, and this must happen along each path through the tree. As a consequence, we have split S^γ_ω into a small number of stationary sets, all of which carry, in HOD, a γ-complete ultrafilter (namely, the restriction of the ω-club filter). This is a very strong way of witnessing the measurability of γ in HOD, and it is quite difficult to mimic this result with forcing.

HOD Conjecture. There is a proper class of cardinals λ that are regular in V and are not ω-strongly measurable in HOD.

This is a very plausible conjecture:
It is not known if there can be more than 3 cardinals that are ω-strongly measurable in HOD.
It is not known if the successor of a singular cardinal of uncountable cofinality can be ω-strongly measurable in HOD.
It is not known whether there can be any cardinals above a supercompact that are ω-strongly measurable in HOD.

The take-home message is that infinitary combinatorics above a supercompact is hard, since supercompactness is extremely fragile.

Theorem. Suppose that δ is extendible. Then the following are equivalent:
1. HOD is a weak extender model for "δ is supercompact".
2. There is some regular γ > δ that is not ω-strongly measurable in HOD.

Hence if item 2. fails, every regular γ > δ is measurable in HOD and, in particular, (γ⁺)^HOD < γ⁺ for every γ ≥ δ. As mentioned previously, there is a scenario for the failure of item 2.: It can be forced in ZF over models in which there is a very strong version of Reinhardt cardinals. But this should really be understood as a scenario towards refuting the existence of Reinhardt cardinals in ZF, at least in the presence of additional strong large cardinal assumptions.

Proof. (1. implies 2.) This we already know, since in the corollary shown in the first lecture we saw that item 1. implies that HOD computes some successors correctly.

(2. implies 1.) Here we will need to use extendibility. Let γ be a regular cardinal witnessing item 2.

Claim. For all λ > γ there are a regular λ′ ≥ λ and a partition in HOD of S^{λ′}_ω into stationary sets.

Proof. Fix λ. Note that for all κ < γ with (2^κ)^HOD < γ there is a partition in HOD of S^γ_ω into κ-many stationary sets. Since δ is extendible, we can find an embedding j : V_{α+1} → V_{j(α)+1} with α much larger than λ and γ, crit(j) = δ, and j(δ) > λ (for example, we could pick α so that V_α reflects the relevant statements). Since γ is not ω-strongly measurable in HOD, then j(γ) is not ω-strongly measurable in HOD as computed in V_{j(α)}. But the witnessing partitions belong to HOD, and j(γ) ≥ λ. This gives us the desired result. ∎

Fix such a λ′ together with a partition ⟨S_ξ : ξ < λ′⟩ ∈ HOD of S^{λ′}_ω into stationary sets. Pick an elementary j : V_{α+1} → V_{j(α)+1} with crit(j) = δ and j(δ) > λ′. Note that j″λ′ ∈ V_{j(α)}.

Claim. For all such λ′, j″λ′ ∈ HOD.

Since for every such λ′ we obtain in this way a normal fine measure on P_δ(λ′) whose restriction to HOD lies in HOD, from this it follows that HOD is a weak extender model for "δ is supercompact".

Proof. Similar to the proof of Solovay's lemma in Lecture 1. Fix λ′ and the partition ⟨S_ξ : ξ < λ′⟩ as above. Let β = sup(j″λ′), and note that β < j(λ′), as the latter is regular. Let ⟨T_ξ : ξ < j(λ′)⟩ = j(⟨S_ξ : ξ < λ′⟩), and note that this sequence belongs to HOD and is a partition of S^{j(λ′)}_ω into stationary sets. Let σ = {ξ < j(λ′) : T_ξ ∩ β is stationary in β}, and note that σ ∈ HOD. We can now argue that σ = j″λ′ just as in the proof of Solovay's lemma. ∎

Since j″λ′ ∈ HOD for all such λ′, if we let U be the measure on P_δ(λ′) derived from j (that is, B ∈ U iff j″λ′ ∈ j(B)), we have that U concentrates on P_δ(λ′) ∩ HOD, and its restriction to HOD is in HOD. This proves that HOD is a weak extender model for "δ is supercompact". But then we are done, by elementarity. ∎

Let us close with some general and sober remarks that Hugh made on how one would go about building extender models. These coarse models use extenders from V (as in the requirement for weak extender models), and typically their analysis suggests how to proceed to their fine-structural counterpart.
When looking at the coarse version for supercompactness, as mentioned before, Magidor's reformulation is ideally suited to build the models, and this was the original approach of the "suitable extender sequences" manuscript. Recent results indicate that comparison fails for these models past superstrongs, and in fact all of V can be coded into these models. This is a serious obstacle to a fine-structural version. Current results suggest that even if one modifies this approach and directly uses, as the extenders in the sequence, measures on ordinals coding supercompactness (which is possible, by Solovay's lemma), comparison should fail as well once the relevant level of supercompactness is reached. This suggests two scenarios, neither particularly appealing: Either iterability (in very general terms) fails, which would force us to completely change the nature of fine-structure theory before we can solve the inner model program for supercompactness, or else the construction of the models collapses quickly, and so a different, not yet foreseen, approach would be required.

Trivia
Doomsday 404 is a 13-dimensional being.
There are three kinds of power inside Doomsday 404's body.
Doomsday 404 does not like celery.
Doomsday 404 has no feelings, yet he dotes on his wife.
Doomsday 404 cannot die.

Relationships
Surreal!Sans (超现实!sans) — wife
Hidden Star Sans (隐星sans) — son