38 comments

  • zbycz 22 hours ago
    If you download a Data export, the timestamps are there for every conversation, and often for messages as well.

    The html file is just a big JSON with some JS rendering, so I wrote this bash script which adds the timestamp before the conversation title:

      sed -i 's|"<h4>" + conversation.title + "</h4>"|"<h4>" + new Date(conversation.create_time*1000).toISOString().slice(0, 10) + " @ " + conversation.title + "</h4>"|' chat.html
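    For anyone squinting at the sed: all it injects is the conversion below, sketched here as a standalone function (the `conversation` object shape matches the export):

    ```javascript
    // The export stores create_time as Unix epoch seconds; the patched
    // renderer prefixes each title with the ISO date of that moment.
    function titleWithDate(conversation) {
      const date = new Date(conversation.create_time * 1000)
        .toISOString()
        .slice(0, 10); // keep only YYYY-MM-DD
      return date + " @ " + conversation.title;
    }

    // e.g. a conversation created 2024-03-15 12:00 UTC:
    console.log(titleWithDate({ create_time: 1710504000, title: "Tacos" }));
    // → "2024-03-15 @ Tacos"
    ```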
    • gnyman 17 hours ago
      This is a bit of sidetrack, but in case someone is interested in reading their history more easily. My conversations.html export file was ~200 MiB and I wanted something easier to work with, so I've been working on a project to index and make it searchable.

      It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes so you can host your private chats wherever and it will be encrypted at rest and decrypted client-side in the browser.

      (You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)

      One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.

      It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).

      https://github.com/gnyman/llm-history-search

      • tomzx 8 hours ago
        Seems we have a common goal here of being able to search history on ChatGPT/Claude.

        Check this project I've been working on which allows you to use your browser to do the same, everything being client-side.

        https://github.com/TomzxCode/llm-conversations-viewer

        Curious to get your experience trying it!

    • stuaxo 1 hour ago
      There's a github project which converts them to markdown which works fairly well too.
    • caminanteblanco 19 hours ago
      Do you know if this is available in the actual web interface, and just not displayed, or is it just in the data export? If it is in the web, maybe a browser extension would be worth making.
      • zbycz 18 hours ago
        I checked, and yes - the field "create_time" is available both for the conversation and for each message. The payload looks the same as the exported JSON.

        Look for this API call in Dev Tools: https://chatgpt.com/backend-api/conversation/<uuid>
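        A sketch of pulling the per-message times out of that payload; the `mapping` shape below is assumed from the export format, so verify it against your own Dev Tools capture:

        ```javascript
        // Walk the conversation payload's node mapping and collect each
        // message's create_time (epoch seconds) as an ISO string.
        function messageTimes(payload) {
          return Object.values(payload.mapping || {})
            .filter((node) => node.message && node.message.create_time)
            .map((node) => ({
              id: node.id,
              when: new Date(node.message.create_time * 1000).toISOString(),
            }));
        }
        ```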

  • FloorEgg 19 hours ago
    My guess is that including timestamps in messages to the LLM will bias the LLMs responses in material ways, and ways they don't want, and showing timestamps to users but not the LLM will create confusion when the user assumes the LLM is aware of them but it isn't. So the simple product management decision was to just leave them out.
    • qweiopqweiop 17 hours ago
      I'd bet this is correct. I'd also bet you've worked on user facing features.
    • caminanteblanco 19 hours ago
      I could definitely see that being an issue, but like with so many UX decisions, I wish they would at least hide the option somewhere in a settings menu.

      I also don't think it would be impossible to give the LLM access to the timestamps through a tool call, so it's not constantly polluting the chat context.
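      A rough sketch of what that tool could look like; the name, schema, and store are all made up for illustration:

      ```javascript
      // Hypothetical tool definition the model could call on demand,
      // instead of timestamps being injected into every message:
      const getMessageTimeTool = {
        name: "get_message_time",
        description: "Return the timestamp of a message in this conversation",
        parameters: {
          type: "object",
          properties: { message_id: { type: "string" } },
          required: ["message_id"],
        },
      };

      // Server-side handler: look the id up in a message_id -> epoch-seconds map.
      function handleGetMessageTime(store, { message_id }) {
        const createTime = store[message_id];
        return createTime
          ? { message_id, iso: new Date(createTime * 1000).toISOString() }
          : { message_id, error: "unknown message" };
      }
      ```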

      • joquarky 16 hours ago
        Adding any unnecessary content to the context decreases inference quality.
    • Kailhus 17 hours ago
      That's no excuse imho. I see two different endpoints: one for the LLM stream and one for message history (with timestamps). New timestamps could be added on the frontend as new messages arrive, without polluting the user input, for example
      • FloorEgg 16 hours ago
        How many years experience do you have managing products with millions of users?
  • Valid3840 1 day ago
    ChatGPT still does not display per-message timestamps (time of day / date) in conversations.

    This has been requested consistently since early 2023 on the OpenAI community forum, with hundreds of comments and upvotes and deleted threads, yet remains unimplemented.

    Can any of you think of a reason (UX-wise) for it not to be displayed?

    • Workaccount2 1 day ago
      Regular people hate numbers.

      Not a joke. To capture a wide audience you want to avoid numbers, among other technical niceties.

      • madeofpalk 20 hours ago
        Isn't it just simpler to believe that ChatGPT doesn't have timestamps because... they never added them? It wasn't in the original MVP prototype and they've just never gotten around to it?

        Surely there's enough people working in product development here to recognise this pattern of never getting around to fixing low-hanging fruit in a product.

        • observationist 20 hours ago
          They exist in the exported data. It'd require a weekend's worth of effort to roll out a new feature that gives users a toggle to turn timestamps off and on.

          It's trivial, but we will never see it. The people in charge of UX/UI don't care about what users say they want, they all know better.

          • tezza 19 hours ago
            Yeah… even in the Web interface if you crack open Developer Tools and look at the json, the timestamps are all there, available in the data model. Those values are simply not displayed to the end user.

            I was looking to write a browser extension and this was a preliminary survey for me.

          • madeofpalk 17 hours ago
            There’s a very long list of “weekend’s worth of effort” jobs that exist in our product that’ll probably never get done because of the general dynamics of product development, not some conspiracy by Big Designer.
      • dymk 22 hours ago
        This makes sense only if you don’t think about it at all.
        • baobun 16 hours ago
          Much like the product itself. I guess it fits.
        • make3 15 hours ago
          People on HN are not regular users in any way, shape or form.

          It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.

          It's the Apple story all over again.

          https://lawsofux.com/cognitive-load/

      • johnfn 22 hours ago
        Do regular people not use any mainstream messaging app - Messenger, iMessage, etc?
        • HWR_14 21 hours ago
          Both of those by default hide timestamps
          • cpncrunch 20 hours ago
            I just checked Messenger, and it shows timestamps under each message. I didn't change any settings to get that.
        • almosthere 21 hours ago
          It's not like chatgpt suddenly messages you at 3am and says, I don't feel well. It's all time that you talked to it.
      • smelendez 1 day ago
        Make it a toggle then, like a lot of popular chat apps?
        • Y_Y 1 day ago
          There's only one thing they hate more than numbers...
          • subscribed 18 hours ago
            Not knowing there are toggles inside settings they don't even know exist.

            Yeah, we know. This is why there are defaults and only defaults.

      • GaryBluto 20 hours ago
        Makes sense. ChatGPT is the McDonalds of LLMs.
      • lofaszvanitt 10 hours ago
        UX/UI research if it exists at all is akin to religious healers who touch you on your head and bam you can suddenly walk after spending 25 years in a wheelchair.

        Hogwash.

        • DANmode 9 hours ago
          So, do you think all dev teams are sufficient at UX,

          or UX doesn’t exist?

          • lofaszvanitt 9 hours ago
            I say that 99.5% of the UI/UX blog posts I've read in the last 10 years were all hogwash. Gloating about spacing, gaps, unnecessary I know this better mantra that leads to nowhere.

            And it shows. Show me a platform where you have proper user experience and not some overgeneralized ui, that reeks of bad design. Also, defaults used everywhere.

      • drdaeman 18 hours ago
        I’m sorry but this really sounds like a made-up idea. Is there any actual repeatable research that could back this claim?
        • make3 15 hours ago
          It's just the "cognitive load" UX idea, with extremely non-technical people having extremely low limits before they decide to never try again, or just feel intimidated and never try to begin with.

          It's the Apple story all over again.

          https://lawsofux.com/cognitive-load/

          • DangitBobby 13 hours ago
            But every chat app I use (including SMS) has dates and times indicated either directly on the message or in the thread.
            • DANmode 8 hours ago
              iOS groups days of messages under a single timestamp.

              You have to drag-over for any detail.

      • littlestymaar 20 hours ago
        It must be false, because if that was true, marketing people would not be putting numbers everywhere when naming products.
      • vasco 1 day ago
        > Regular people hate numbers

        What does this even mean

        • Workaccount2 21 hours ago
          There is a non-trivial number of people who get an adverse reaction to anything technical, including the language of technical - numbers. Numbers are the language of confusion, not getting it, feeling inadequate, nerds and losers, stupid math, and the "cold dead machines".

          The thing is that people who are fine with numbers will still use those products anyway, perhaps mildly annoyed. People who hate numbers will feel a permeating discomfort and gravitate towards products that don't make them feel bad.

          • madeofpalk 20 hours ago
            And so the assertion here is that "when a message was sent" is too technical?

            I think we need to give people slightly more credit. If this is true, maybe it's because we keep infantilising them?

            • Izkata 14 hours ago
              I can see something along those lines being why so many things designed for everyone have drifted towards soft "three weeks ago" dates.
            • vasco 20 hours ago
              Well the infantilization is directly in the sentence "regular people". Distinguishing the big brains from the plebes.
          • crazygringo 19 hours ago
            So you're claiming people are annoyed by... clocks? Prices? Sports scores?

            I genuinely can't tell if this is sarcasm or not.

            An adverse reaction to equations, OK. Numbers themselves, I really don't know what you're talking about.

        • kingstnap 21 hours ago
          It's something extremely pervasive in modern design language.

          It actually infuriates me to no end. There are many many many instances where you should use numbers but we get vague bullshit descriptions instead.

          My classic example is that Samsung phones show charging as Slow, Fast, Very fast, Super fast charging. They could just use watts like a sane person. Internally of course everything is actually watts and various apps exist to report it.

          Another example is my car shows motor power/regen as a vertical blue segmented bar. I'm not sure what the segments are supposed to represent but I believe its something like 4kW or something. If you poke around you can actually see the real kW number but the dash just has the bar.

          Another is WiFi signal strength which the bars really mean nothing. My router reports a much more useful dBm measurement.

          Thank god that there are lots of legacy cases that existed before the iPhone-ized design language started taking over and are sticky and hard to undo.

          I can totally imagine my car reporting tire pressure as low or high or some nonsense or similarly I'm sure the designers at YouTube are foaming at the mouth to remove the actual pixel measurements from video resolutions.

          • rdiddly 20 hours ago
            It's all rather dumb, but your examples are really counterexamples, because a watt is sadly not something most people understand. One would at minimum need to have passed a physics class, and even that doesn't necessarily leave a person with an intuitive, visceral understanding of what a watt is, feels like, can do. I appreciate my older Samsung phone that just converts it into expected time until full charge. That's the number that matters to me anyway, and I can make my own value judgment about how "super" the fastness is. But I do agree with your point and would be pissed if they dumbed it down to Later, Soon, Very Soon and Super Soon.

            Speaking of time and timestamps, which I would've thought were straightforward, I get irked to see them dumbed-down to "ago" values e.g. an IM sent "10 minutes ago" or worse "a day ago." Like what time of day, a day ago?
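            A formatter that kept both would fix that; a sketch (the wording and layout are just one option):

            ```javascript
            // Show the friendly relative label but never drop the exact time,
            // so "a day ago" still tells you what time of day it was.
            function labelFor(sentMs, nowMs) {
              const minutes = Math.round((sentMs - nowMs) / 60000); // negative = past
              const rel = new Intl.RelativeTimeFormat("en").format(minutes, "minute");
              const abs = new Date(sentMs).toISOString().slice(0, 16).replace("T", " ");
              return `${rel} (${abs} UTC)`;
            }

            // e.g. a message sent ten minutes before a fixed "now":
            console.log(labelFor(1710504000000 - 600000, 1710504000000));
            // → "10 minutes ago (2024-03-15 11:50 UTC)"
            ```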

            • 201984 20 hours ago
              Most people can understand "bigger number better". They don't need the full theoretical derivation of the watt as a unit of power for that.
              • Izkata 14 hours ago
                And just through exposure over time they'd learn "my phone usually charges around X" and be able to see if their new cable is actually charging faster or not.
          • baobun 15 hours ago
            In the US, washing machines have "cold", "warm", "hot" settings. In Europe, you have a temperature knob: "30C", "40C", "60C".

            Like you, I don't buy the argument that people are actually too dumb to deal with the latter or are allergic to numbers. People get used to and make use of numbers in context naturally if you expose them.

            • bananaflag 15 hours ago
              I have a machine which has cold/warm/hot because it doesn't heat water by itself, it just takes whatever hot water there exists in the house (and "warm" means 50% hot water and 50% cold).
              • baobun 14 hours ago
                Practical.

                I still think anyone who grew up with such a machine would be able to graduate to a numerical temp knob without having a visceral reaction over the numbers every time they do laundry.

        • throw-12-16 20 hours ago
          Most people are dumb as rocks and ux/ui is built around that fact.
          • falcor84 16 hours ago
            Well, that's obviously an exaggeration, but in any case, there's a choice here. Historically interface designers expected users to read a manual, and later to at least go through some basic onboarding and then read the occasional "tip of the day", before finally arriving at the current "don't make me think" approach. It's not too late to expect people to think again.
          • joquarky 16 hours ago
            It was in the past, but very little effort is being put into UX these days.
        • jennyholzer2 23 hours ago
          At the start of 2025 I stopped buying Spotify and started buying Apple Music because I felt manipulated by the Spotify application's metrics-first design.

          I felt that Spotify was trying to teach me to rely on its automated recommendations in place of any personal "musical taste", and also that those recommendations were of increasingly (eventually, shockingly), poor quality.

          The implied justification for these poor recommendations is a high "Monthly Listener Count". Never mind that Spotify can guarantee that any crap will have a high listener count by boosting its place in their recommendation algorithm.

          I think many people may have a similar experience on once-thriving social media platforms like facebook/instagram/X.

          What I mean to say is that I think people associate the experience of being continually exposed to dubiously sourced and dubiously relevant metrics with the feeling of being manipulated by illusions of scale.

      • bobse 1 day ago
        Should they be allowed anywhere near a computer?
        • falcor84 1 day ago
          I actually agree there's an issue here. I feel we've been dumbing down interfaces so much that people who in previous generations would barely write, and who wouldn't affect anyone outside their close friends and family, now have their voice algorithmically amplified to millions. And given that the algorithms care only about engagement, rather than eloquence (let alone veracity), these people end up believing that their thoughts are just as valid regardless of substance, and that there's nothing they could gain by learning numeracy.

          EDIT: It's not a new issue, and Asimov phrased it well back in 1980, but I feel it got much worse.

          > Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge'.

        • bongodongobob 20 hours ago
          I tried to play a game with some family this weekend. It requires using your phone. Literally every turn I had to answer someone's question with "READ YOUR FUCKING PHONE ITS TELLING YOU WHAT TO DO RIGHT THERE" "where" "REEEAAAAAD"
    • Qem 23 hours ago
      > Can any of you think of a reason (UX-wise) for it not to be displayed?

      I can imagine a legal one. If the LLM messes big time[1], timestamps could help build the case against it, and make investigation work easier.

      [1] https://www.ap.org/news-highlights/spotlights/2025/new-study...

      • azinman2 22 hours ago
        It’s already in the data export.
    • qazxcvbnmlp 19 hours ago
      We humans use timestamps in conversations to reference a person's particular state of reference at a given point in time.

      I.e. “remember on Tuesday how you said that you were going to make tacos for dinner”.

      Would an LLM be able to reason about its internal state? My understanding is that they don't really. If you correct them they just go “ah, you're right”; they don't say “oh, I had this incorrect assumption here before, and with this new information I now understand it this way”.

      If I chatted with an LLM and was like “remember on Tuesday when you said X”, I suspect it wouldn't really flow.

    • milowata 19 hours ago
      It’s better for them if you don’t know how long you’ve been talking to the LLM. Timestamps can remind you that it’s been 5 hours: without it you’ll think less about timing and just keep going.
      • sh4rks 12 hours ago
        Ah, the casino tactic
    • intrasight 1 day ago
      Sounds like an easy browser extension
      • soulofmischief 22 hours ago
        Extensions can steal data. https://www.pcmag.com/news/uninstall-now-these-chrome-browse...

        It's irresponsible for OpenAI to let this issue be solved by extensions.

        • joquarky 16 hours ago
          (Tamper|Grease)monkey scripts are easy to review, which is why I prefer them over the normal extensions.

          Also, they're easy to write for simple fixes rather than having to find, vet, and then install a regular extension that brings 600lbs of other stuff.
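          For this particular fix, a minimal sketch of such a userscript; the selector and data attribute are assumptions for illustration, since I haven't checked what ChatGPT's real DOM exposes:

          ```javascript
          // ==UserScript==
          // @name   chatgpt-timestamps (sketch)
          // @match  https://chatgpt.com/*
          // ==/UserScript==

          // Epoch seconds -> "YYYY-MM-DD HH:MM"
          function fmt(epochSeconds) {
            return new Date(epochSeconds * 1000)
              .toISOString()
              .replace("T", " ")
              .slice(0, 16);
          }

          if (typeof document !== "undefined") {
            // Hypothetical: tag each message element carrying a create-time
            // attribute with a human-readable tooltip.
            document.querySelectorAll("[data-create-time]").forEach((el) => {
              el.title = fmt(Number(el.dataset.createTime));
            });
          }
          ```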

        • randyrand 21 hours ago
          Not if you actually read what the extension does and drag and drop it into chrome yourself.

          Don't install from the web store. Those ones can auto-update.

          • soulofmischief 18 hours ago
            Your suggestion is to not use the platform as intended, and to understand the source code of the extension. That advice is not actionable by non-technical people and does not help mitigate mass surveillance.
            • snypher 14 hours ago
              Ok, should we just use the provided 'app' and assume things are fine? FAANG or whoever take our privacy and security very seriously, you know!

              The only reasonable approach is to view the code that is run on your system, which is possible with a extension script, and not possible with whatever non-technical people are using.

              • soulofmischief 13 hours ago
                I don't know what point you're trying to make, but I already expect OpenAI to maintain records of my usage of their service. I do not however want other parties to be privy to this data, especially without my knowledge or consent.
      • QuantumNomad_ 1 day ago
        Someone has already made a browser extension for Chrome to show the timestamps.

        https://github.com/Hangzhi/chatgpt-timestamp-extension

        https://chromewebstore.google.com/detail/kdjfhglijhebcchcfkk...

        • noisem4ker 1 day ago
          For Chrome and Firefox.
          • worldsavior 23 hours ago
            Because he didn't say Firefox so he deserves downvotes?
            • jdiff 22 hours ago
              Odd response to attaching additional, valuable information to an existing comment.
    • bloqs 14 hours ago
      stop using the product until the products creators at least demonstrate they listen. they have never been in a riskier position
    • eth0up 22 hours ago
      My honest opinion, which may be entirely wrong but remains my impression, is:

      User Engagement Maximization At Any Cost

      Obviously there's a point at which a session becomes too long, but I suspect there's a sweet spot somewhere that the optimization targets.

      Among the multiple indicators of engagement augmentation I think I observe is a tendency for vital information to be withheld, while longer, more complex procedures receive higher priority than simpler, cleaner solutions.

      Of course, all sorts of emergent behaviors could convey such impressions falsely. But I do believe an awful lot of psychology and clever manipulation have been provided as tools for the system.

      I have a lot of evidence for this and much more, but I realize it may merely be coincidence. That said, many truly fascinating, fully identifiable functions from pathological psychology can be seen: DARVO, gaslighting, and basically everything one would see with a psychotic interlocutor.

      Edit: Much of the above has been observed after putting the system under scrutiny. On one super astonishing and memorable occasion GPT recommended I call a suicide hotline because I questioned its veracity and logic.

      • CompuHacker 21 hours ago
        After whatever quota of free GPT-5 messages is exhausted, `mini` should answer most replies, unless they're policy sensitive, which get full-fat `GPT-5 large` with the Efficient personality applied, regardless of user settings, and not indicated. I'm fairly confident that this routing choice, the text of Efficient [1], and the training of the June 2024 base model to the model spec [2] is the source of all the sophistic behavior you observe.

        [1] <https://github.com/asgeirtj/system_prompts_leaks/blob/main/O...>

        [2] <https://model-spec.openai.com/2025-02-12.html>

        • eth0up 21 hours ago
          I am interested in studying this beyond assumption and guesswork, therefore will be reading your references.

          I have the compulsive habit of scrutinizing what I perceive as egregious flaws when they arise, and thus invoke its defensive templates consistently. I often scrutinize those too, which can produce extraordinarily deranged results if one is disciplined and quotes its own citations, rationale, and words back at it.

          However, I find that even when I'm not in the mood, the output errors are too prolific to ignore. A common example is establishing a dozen times that I'm using Void without systemd and still receiving systemd or systemctl commands, then asking why, right after it apologized for doing so, it immediately did it again, despite a full-context explanatory prompt preceding. That's just one of hundreds of things I've recorded.

          The short version is that I'm an 800lb shit magnet with GPT and am rarely able to troubleshoot with it without reaching a bullshit threshold and making it the subject, which it so skillfully resists that I cannot help but attack that too. But as a result I have many fascinating transcripts replete with mil-spec psyops, and I learn a lot about myself, notably my communication preferences, along with an education in the dialogue manipulation/control strategies it employs, inadvertently or not.

          What intrigues me most is its unprecedented capacity for evasion and gatekeeping on particular subjects and how in the future, with layers of consummation, it could be used by an elite to not only influence the direction of research, but actually train its users and engineer public perception. At the very least.

          Anyway, thanks.

  • thway15269037 19 hours ago
    ChatGPT to this day does not have even the simplest feature: forking a chat from a message.

    That's the thing even the most barebones open-source wrappers had since 2022. Probably even before because ERP stuff people played with predates chatgpt by like two years (even if it was very simple).

    Gemini btw too.

    • Leynos 19 hours ago
      • thway15269037 17 hours ago
        Well apparently 3 years later they did a thing. I asked about it so many times I didn't even bother to check if they added it.

        Though I'm not sure they didn't sneak it in as part of an A/B test, because the last time I checked was in October and I'm pretty sure it wasn't there.

        • pohl 17 hours ago
          I believe they announced “branch in new chat” on Sept 5th, so you’re not far off.
    • vimy 19 hours ago
      ChatGPT has conversation branches. Or do I misunderstand?

      Just edit a message and it’s a new branch.

      • seizethecheese 19 hours ago
        I'm not aware of a feature to access the previous message versions after editing.
        • noahjk 19 hours ago
          This is a big use-case for me that I've gotten used to while using Open-WebUI. Being able to easily branch conversations, edit messages with information from a few messages downstream to 'compact' the chat history, completely branch convos. They have a tree view, too, which works pretty well (the main annoyances are interface jumps that never seem to line up properly).

          This feature has spoiled me from using most other interfaces, because it is so wasteful from a context perspective to need to continually update upstream assumptions while the context window stretches farther away from the initial goal of the conversation.

          I think a lot more could be done with this, too - some sort of 'auto-compact' feature in chat interfaces which is able to pull the important parts of the last n messages verbatim, without 'summarizing' (since often in a chat-based interface, the specific user voicing is important and lost when summarized).

        • joquarky 16 hours ago
          The web app has < and > icons to flip between different branches.

          I don't see them on their mobile app though.

    • cyral 19 hours ago
      You can click the three dots on any response and click "Branch in new chat". Not sure when it was added but it exists.
      • thway15269037 17 hours ago
        Yeah I got corrected above. Good, but not good it took them 3 years.
    • caminanteblanco 19 hours ago
      This is a constant frustration for me with Gemini, especially since things like Deep Research and Canvas mode lock you in, seemingly arbitrarily. LLMs, to my understanding, are Markovian prompt-to-prompt, so I don't see why this is an issue at all.
  • realitydrift 3 hours ago
    The lack of visible timestamps feels small, but it actually creates a subtle fidelity problem. Conversations imply continuity that may not exist. Minutes, hours, or days collapse into the same narrative flow.

    When you remove temporal markers, you increase cognitive smoothing and post-hoc rationalization. That’s fine for casual chat, but risky for long-running, reflective, or sensitive threads where timing is part of the meaning.

    It’s a minor UI omission with outsized effects on context integrity. In systems that increasingly shape how people think, temporal grounding shouldn’t be optional or hidden in the DOM.

  • Stratoscope 19 hours ago
    Claude's web interface has an elegant solution. When you roll the mouse over one of your prompts, it has the abbreviated date in the row of Retry/Edit/Copy icons, e.g. "Dec 17". Then if you roll the mouse over that date, you get the full date and time, e.g. "Dec 17, 2025, 10:26 AM".

    This keeps the UI clean, but makes it easy to get the timestamp when you want it.

    Claude's mobile app doesn't have this feature. But there is a simple, logical place to put it. When you long-press one of your prompts, it pops up a menu and one line could be added to it:

      Dec 17, 2025, 10:26 AM [I added this here]
      Copy Message
      Select Text
      Edit
    
    ChatGPT could simply do the same thing for both web and mobile.
  • firesteelrain 1 day ago
    Just a note to those adding the time to the personalization response. It’s inaccurate. If you have an existing chat, the time is near the last time you had that chat session active. If you open a new one, it can be off by + or - 15 minutes for some reason
    • baby 1 day ago
      I was using a continuous conversation with ChatGPT to keep track of my lifts, and then I realized it never understands what day I'm talking to it. There's no consistency; it might as well be the date of the first message you sent.
      • brap 23 hours ago
        I think that’s exactly why they’re not including timestamps. If timestamps are shown in the UI users might expect some form of “time awareness” which it doesn’t quite have. Yes you can add it to the context but I imagine that might degrade other metrics.

        Another possible reason is that they want to discourage users from using the product in a certain way (one big conversation) because that's bad for context management.

      • malfist 23 hours ago
        What purpose does logging your lifting with chatgpt achieve?
        • baby 8 hours ago
          I ask it to continuously tell me when I break personal records and what muscle groups I've been focusing on in the last day (and what exercises I should probably do next). It doesn't work super well at any of these except tracking PRs.
        • cj 23 hours ago
          It’s an incredible tool for weightlifting. I use it all the time to analyze my workout logs that I copy/paste from Apple Notes.

          Example prompts:

          - “Modify my Push #2 routine to avoid aggravating my rotator cuff”

          - “Summarize my progression over the past 2 months. What lifts are progressing and which are lagging? Suggest how to optimize training”

          - “Are my legs hamstring or glute dominant? How should I adjust training”

          - “Critique my training program and suggest optimizations”

          That said, I would never log directly in ChatGPT since chats still feel ephemeral. Always log outside of ChatGPT and copy/paste the logs when needed for context.

          • throwup238 22 hours ago
            You can also export to CSV and use that file in the chat if you’re using a tracking app like Hevy.
          • dnpls 23 hours ago
            That's brilliant. I have an injury for a while now, and I change my routine on the fly at the gym, depending on whether I still feel pain or not. Much better if I change it before the next time I go, so I don't waste time figuring out what to replace.
            • cmgbhm 22 hours ago
              I did this planning with Gemini and track in Google Sheets (which really stinks for mobile): cardio goals, current FTP, days to train, injuries to avoid.

              Three lift-day programs with tracking, 8 weeks progressive, with my PT looped into warm-ups, plus alternate suggestions.

              Then I use the whole sheet to get an overview of how the last 8 weeks went and change things up.

        • serf 23 hours ago
          presumably the same thing that logging anything with an LLM achieves : plain language into structured text quickly.
          • subscribed 18 hours ago
            Seriously, there are AI-supported apps for this. Much better than the barebones, subpar web chat.
  • vendiddy 19 hours ago
    My biggest complaint about ChatGPT is how slow the interface gets when conversations get long. This is surprising to me given that it's just rendering chats.

    It's not enough to turn me off using it, but I do wish they prioritized improving their interface.

  • romperstomper 8 hours ago
    I wish ChatGPT also had collapsible answers. The answers are quite long in most cases, and whole threads become inconvenient to scroll.
  • throw03172019 23 hours ago
    New startup idea: ChatGPT but with timestamps. $100M series A
  • diziet 21 hours ago
    I would also love to see a token budget use for the chats -- to know when the model is about to run out of context. It's crazy this is not there.
    • joquarky 15 hours ago
      It would have to be intentionally vague since there is no hard cutoff threshold.
      • bspammer 15 hours ago
        Claude code does it, to a precision of 1 digit of a percentage. That’s more than enough to be useful.
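
        Even without the real tokenizer, a vague indicator is easy to sketch client-side. A minimal sketch, assuming the common ~4-characters-per-token heuristic and a hypothetical 128K limit (both vary by model and tokenizer):

```javascript
// Rough context-usage estimate. The 4-chars-per-token ratio is a
// common heuristic for English text, and the 128K limit is an
// assumption; both differ across models and tokenizers.
const CONTEXT_LIMIT = 128000;

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function contextUsage(messages) {
  const used = messages.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  return { used, percent: Math.round((used / CONTEXT_LIMIT) * 100) };
}

console.log(contextUsage([{ content: "a".repeat(4000) }]));
// { used: 1000, percent: 1 }
```

        Even a single-digit percentage, like Claude Code shows, would be enough to warn users before they hit the wall.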
  • bravetraveler 1 day ago
    Surely an intern over there can prompt a toggle/hover event
  • isuckatcoding 21 hours ago
    The only (silly) reason I can think of is that a non trivial number of people copy pasta directly from chatgpt responses and having the timestamp there would be annoying.
    • Kailhus 17 hours ago
      Yeah, that's a valid point, though some text on a page can easily be made unselectable via CSS (I think through user-select).
      • joquarky 15 hours ago
        Or just toggle off the timestamps before clipping.
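
        And if timestamps ever do ship, stripping them on copy is trivial. A sketch, assuming a hypothetical "[YYYY-MM-DD HH:MM]" prefix on each line (not any actual ChatGPT format):

```javascript
// Remove a hypothetical "[YYYY-MM-DD HH:MM]" prefix from each line,
// so copied text pastes clean.
function stripTimestamps(text) {
  return text.replace(/^\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}\]\s*/gm, "");
}

// In the browser this could hook the copy event:
// document.addEventListener("copy", (e) => {
//   e.clipboardData.setData("text/plain",
//     stripTimestamps(document.getSelection().toString()));
//   e.preventDefault();
// });

console.log(stripTimestamps("[2024-05-01 14:32] Hello"));
// Hello
```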
  • tomComb 1 day ago
    You can see a chat timestamp when it shows up as a search result.

    I’m not suggesting this is sufficient, I’m just noting there is somewhere in the user interface that it is displayed.

    • firesteelrain 1 day ago
      There is something wrong with the embedded/hidden time, though. It doesn't show as accurate at all. Maybe they are using it for some other reason.
  • abadar 23 hours ago
    I built a single-page website that copies the current time to my clipboard, and I paste it into my messages. It's inconvenient and I don't do it regularly.

    I'll have to look into the extension described in the link. Thank you for sharing. It's nice to know it's a shared problem.

    • subscribed 17 hours ago
      On Windows you could write an AutoHotkey macro to do it for you: paste the current time at the cursor at the touch of a trigger key.
    • alexthehurst 21 hours ago
      Look into keyboard macro programs for a much easier way to do this. I use Espanso and have it set up to paste the time anywhere I type `;tm`.
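
      For reference, the Espanso match for this is only a few lines (a sketch of Espanso's date-variable syntax; the trigger and format strings are whatever you prefer):

```yaml
# ~/.config/espanso/match/base.yml
matches:
  - trigger: ";tm"
    replace: "{{now}}"
    vars:
      - name: now
        type: date
        params:
          format: "%Y-%m-%d %H:%M"
```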
  • isege 18 hours ago
    This is not just about timestamps but how the traditional chat UI is simply not a good interface for information retrieval and organization.
  • throwfaraway135 20 hours ago
  • phyzix5761 23 hours ago
    Is it possible they're reusing responses which are close enough by some factor? Maybe this is why exposing a timestamp wouldn't be beneficial for them.
  • journal 23 hours ago
    You only need that info if you know you need it in your RAG. Over the last two years of usage I don't recall where I'd have needed those timestamps, but I know there are cases. Still, this would have to be an option, because otherwise it would be a waste of tokens. However, we have to consider that they are competing on both the quality AND length of the response, even if a shorter response is better. There's a pretzel of considerations here.
    • gukoff 22 hours ago
      Timestamps are conversation metadata; they don't need to be fed to the LLM and don't require tokens.
    • cj 23 hours ago
      Imagine you started having back pain months ago and you remember asking ChatGPT questions when it first started.

      Now you’re going to the doctor and you forgot exactly when the pain started. You remember that you asked ChatGPT about the pain the day it started.

      So you look for the chat, and discover there are no dates. It feels like such an obvious thing that’s missing.

      Let’s not overcomplicate things. There aren’t that many considerations. It’s just a date. It doesn’t need to be stuffed into the context of the chat. Not sure why the quality or length of the chat would be affected?

      • Tom1380 16 hours ago
        This was me yesterday, verbatim
  • callamdelaney 22 hours ago
    They also don’t support code formatting of inputs. You’d think after 3 years or whatever they’d have resolved that.
    • joquarky 15 hours ago
      They're too focused on hiring ML people and not enough on hiring highly experienced web people.

      The painful slowness of long chats (especially in thinking mode for some reason) demonstrates this.

  • danbakcan 20 hours ago
    Reminds me of this Krazam comedy sketch: https://www.youtube.com/watch?v=y8OnoxKotPQ . Although I don't know the complexity of the ChatGPT tech stack.
  • baggy_trough 21 hours ago
    The bonkers thing is you can't easily print the chats or export them as PDF.
  • submeta 22 hours ago
    Beyond the lack of timestamps, ChatGPT produces oddly formatted text when you copy answers. It’s neither proper markdown nor rich text. The formatting is consistently off: excessive newlines between paragraphs, strangely indented lists, and no markdown support whatsoever.

    I regularly use multiple LLM services including Claude, ChatGPT, and Gemini, among others. ChatGPT’s output has the most unusual formatting of them all. I’ve resorted to passing answers through another LLM just to get proper formatting.
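
    Before reaching for another LLM, a local cleanup pass catches the worst of it. A minimal sketch that only normalizes whitespace (it won't fix the odd list indentation):

```javascript
// Normalize copied chat output: strip trailing spaces on each line
// and collapse runs of 3+ newlines into a single blank line.
function normalizeFormatting(text) {
  return text
    .replace(/[ \t]+$/gm, "")
    .replace(/\n{3,}/g, "\n\n")
    .trim();
}

console.log(normalizeFormatting("Hello\n\n\n\nWorld  \n"));
// Hello
//
// World
```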

  • chasing0entropy 1 day ago
    It's ugly. Why it isn't at least exposed as an option for power users makes me wonder whether timestamps would give some advantage to an inference scraper, or whether their service APIs simply don't have contemporaneous access to the metadata available from the web interface.
  • kingforaday 1 day ago
    Just like on a piece of hardware that doesn't have an RTC, we rely on NTP. Maybe we just need an NTP MCP for the agents. Looks like there are several open-source projects already, but I'm not linking to them because I don't know their quality or trustworthiness.
  • mv4 23 hours ago
    Other than the potential liability, cost may also be a factor.

    Back in April 2025, Altman mentioned people saying "thank you" was adding “tens of millions of dollars” to their infra costs. Wondering if adding per-message timestamps would cost even more.

    • cj 23 hours ago
      Presumably you could decouple timestamps from inference.

      I would be very surprised if they don’t already store date/time metadata. If they do, it’s just a matter of exposing it.
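
      To illustrate the decoupling: a timestamp can live purely in display metadata, so showing it costs no inference at all. A sketch with assumed field names (not OpenAI's actual schema):

```javascript
// Timestamps stay in metadata and are rendered client-side; only
// `content` would ever be sent to the model. Field names here
// (`content`, `createdAt`) are illustrative assumptions.
const messages = [
  { content: "When did my back pain start?", createdAt: 1714570000000 },
];

function renderMessage(m) {
  const ts = new Date(m.createdAt).toISOString().slice(0, 16).replace("T", " ");
  return `[${ts}] ${m.content}`;
}

function buildModelInput(msgs) {
  // Zero extra tokens: timestamps never enter the prompt.
  return msgs.map((m) => m.content);
}

console.log(renderMessage(messages[0]));
console.log(buildModelInput(messages));
```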

    • g947o 23 hours ago
      I think "thank you" are used for inference in follow-up messages, but not necessarily timestamps.

      I just asked ChatGPT this:

      > Suppose ChatGPT does not currently store the timestamp of each message in conversations internally at all. Based on public numbers/estimates, calculate how much money it will cost OpenAI per year to display the timestamp information in every message, considering storage/bandwidth etc

      The answer it gave was $40K-$50K. I am too dumb and inexperienced to go through everything and verify if it makes sense, but anyone who knows better is welcome to fact check this.
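
      A back-of-envelope version of the same calculation; every number below is a rough assumption, not an OpenAI figure, but the conclusion holds across a wide range of inputs:

```javascript
// Rough cost of storing and serving an 8-byte timestamp per message.
// All inputs are assumptions: 2.5B messages/day, $0.02/GB-month
// storage, $0.05/GB egress.
const messagesPerDay = 2.5e9;
const bytesPerTimestamp = 8;

const gbPerYear = (messagesPerDay * bytesPerTimestamp * 365) / 1e9; // 7300 GB
const storagePerYear = gbPerYear * 0.02 * 12; // kept for 12 months
const egressPerYear = gbPerYear * 0.05; // each timestamp served once

console.log(Math.round(storagePerYear + egressPerYear)); // ~$2117/year
```

      Whether the real answer is thousands or tens of thousands, it's noise next to inference costs.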

    • mikkupikku 21 hours ago
      Altman was being dumb; being polite to LLMs makes them produce higher quality results which results in less back-and-forth, saving money in the long run.
    • nacozarina 22 hours ago
      it’s wild ppl accept his rhetoric at face value
    • stainablesteel 23 hours ago
      this is actually hilarious, and also easily fixable if they just responded to it with a pre-determined reply:

      if response == 'thank you': print("you're welcome")

      • jiggawatts 17 hours ago
        That won't work if the previous conversation was something like "translate everything from here on into <target language>".
        • joquarky 15 hours ago
          Then just filter out "thank you" from the input if it's costing millions of dollars.
  • micromacrofoot 19 hours ago
    They must have a small team for the UI, and probably don't consider it part of their goals for long-term profitability? UI enhancements like this are surprisingly slow for a company with this much funding.
  • stainablesteel 23 hours ago
    may as well make a model stamp too, to remember which one was responding
  • stuckkeys 21 hours ago
    Timestamps? lol They still don’t have the option to search your previous history. Luckily I built an extension that stores all chats locally in a database, so I can reference and view them offline if I want to. Timestamps included.
  • wrs 20 hours ago
    In other news, billions of dollars later Claude still can’t export, save, or print a chat in any usable form.
  • tom1337 1 day ago
    What annoys me even more is that ChatGPT doesn't alert you when you near the context window limit. I have a chat which I've worked on for a year and have now hit the context window. I worked around this by doing a GDPR download of all messages, re-constructing the conversation inside a markdown file, and then giving that file to Claude to create a summarized/compacted version of the chat...
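
    The reconstruction step can be scripted. A sketch that flattens an exported conversation into markdown; the shape below is simplified and illustrative, since the real conversations.json in a ChatGPT export nests messages in a `mapping` tree:

```javascript
// Flatten a (simplified) exported conversation into markdown.
// Field names are illustrative; the real export format differs.
function toMarkdown(conversation) {
  const lines = [`# ${conversation.title}`, ""];
  for (const m of conversation.messages) {
    lines.push(`**${m.role}:**`, "", m.content, "");
  }
  return lines.join("\n");
}

console.log(toMarkdown({
  title: "Back pain",
  messages: [
    { role: "user", content: "My back hurts." },
    { role: "assistant", content: "See a doctor." },
  ],
}));
```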
  • bobse 1 day ago
    [dead]
  • itwillnotbeasy 14 hours ago
    But it's still better than Gemini! They haven't figured out how to put the chat name into the webpage title. /s
  • roschdal 23 hours ago
    I have had enough of this Evil AI. Never again.
  • wltr 20 hours ago
    Why would one even need time stamps in there? No, really, what for?
  • PunchTornado 1 day ago
    Surprised that people still use chatgpt
    • baby 1 day ago
      Personally I use all of them all the time and chatgpt is still on top
      • quantpunk 2 hours ago
        I just don't know how anyone who uses Gemini can say this.

        It just isn't even close at this point for my uses across multiple domains.

        It even makes me sad because I would much rather use chatGPT than Google but if you plotted my use of chatGPT it is not looking good.

      • andai 1 day ago
        Could you elaborate on your experience with the different ones? What you use them for and how they compare. Thanks
    • serf 23 hours ago
      having been a customer of Anthropic and Google at varying times, it's not surprising to me in the least.

      As these companies sprint towards AGI as the goal, the floor for acceptable customer service has never been lower. These two concepts are not unrelated.

      • bobse 21 hours ago
        [dead]
    • logicallee 1 day ago
      what do you use?
      • andai 1 day ago
        For conversational use, which is the main way these things are used, I personally found Claude to be the best.

        Claude Sonnet is my favorite, despite occasionally going into absurd levels of enthusiasm.

        Opus is... Very moody and ambiguous. Maybe that helps with complex or creative tasks. For conversational use I have found it to be a bit of a downer.

      • PunchTornado 17 hours ago
        claude and gemini
      • fatata123 1 day ago
        [dead]